Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1660–1670, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Who caught a cold? — Identifying the subject of a symptom Shin Kanouchi†, Mamoru Komachi†, Naoaki Okazaki‡, Eiji Aramaki§, and Hiroshi Ishikawa† †Tokyo Metropolitan University, {kanouchi-shin at ed., komachi at, ishikawh at}tmu.ac.jp ‡Tohoku University, okazaki at ecei.tohoku.ac.jp §Kyoto University, eiji.aramaki at gmail.com Abstract The development and proliferation of social media services has led to the emergence of new approaches for surveying the population and addressing social issues. One popular application of social media data is health surveillance, e.g., predicting the outbreak of an epidemic by recognizing diseases and symptoms from text messages posted on social media platforms. In this paper, we propose a novel task that is crucial and generic from the viewpoint of health surveillance: estimating a subject (carrier) of a disease or symptom mentioned in a Japanese tweet. By designing an annotation guideline for labeling the subject of a disease/symptom in a tweet, we perform annotations on an existing corpus for public surveillance. In addition, we present a supervised approach for predicting the subject of a disease/symptom. The results of our experiments demonstrate the impact of subject identification on the effective detection of an episode of a disease/symptom. Moreover, the results suggest that our task is independent of the type of disease/symptom. 1 Introduction Social media services, including Twitter and Facebook, provide opportunities for individuals to share their experiences, thoughts, and opinions. The wide use of social media services has led to the emergence of new approaches for surveying the population and addressing social issues. One popular application of social media data is flu surveillance, i.e., predicting the outbreak of influenza epidemics by detecting mentions of flu infections on social media platforms (Culotta, 2010; Lampos and Cristianini, 2010; Aramaki et al., 2011; Paul and Dredze, 2011; Signorini et al., 2011; Collier, 2012; Dredze et al., 2013; Gesualdo et al., 2013; Stoové and Pedrana, 2014). Previous studies mainly relied on shallow textual clues in Twitter posts in order to predict the number of flu infections, e.g., the number of occurrences of specific keywords (such as “flu” or “influenza”) on Twitter. However, such a simple approach can lead to incorrect predictions. Broniatowski et al. (2013) argued that media attention increases chatter, i.e., the number of tweets that mention the flu without the poster being actually infected. Examples include, “I don’t wish the flu on anyone” and “A Harry Potter actor hospitalised after severe flu-like syndromes.” Lazer et al. (2014) reported large errors in Google Flu Trends (Carneiro and Mylonakis, 2009) on the basis of a comparison with the proportion of doctor visits for influenza-like illnesses. Lamb et al. (2013) aimed to improve the accuracy of detecting mentions of flu infections. 
Their method trains a binary classifier to distinguish tweets reporting flu infections from those expressing concern or awareness about the flu, e.g., “Starting to get worried about swine flu.” Accordingly, they reported encouraging results (e.g., better correlations with CDC trends), but their approach requires supervision data and a lexicon (word class features) specially designed for the flu. Moreover, even though this method is a reasonable choice for improving the accuracy, it is not readily applicable to other types of diseases (e.g., dengue fever) and symptoms (e.g., runny nose), which are also important for public health (Velardi et al., 2014). In this paper, we propose a more generalized task setting for public surveillance. In other words, our objective is to estimate the subject (carrier) of a disease or symptom mentioned in a Japanese tweet. More specifically, we are interested in determining who has a disease/symptom 1660 (if any) in order to examine whether the poster suffers from the disease or symptom. For example, given the sentence “I caught a cold,” we would predict that the first person (“I,” i.e., the poster) is the subject (carrier) of the cold. On the other hand, we can ignore the sentence, “The TV presenter caught a cold” only if we predict that the subject of the cold is the third person, who is at a different location from the poster. Although the task setting is simple and intuitive, we identify several key challenges in this study. 1. Novel task setting. The task of identifying the subject of a disease/symptom is similar to predicate-argument structure (PAS) analysis for nominal predicates (Meyers et al., 2004; Sasano et al., 2004; Komachi et al., 2007; Gerber and Chai, 2010). However, these studies do not treat diseases (e.g., “influenza”) and symptoms (e.g., “headache”) as nominal predicates. To the best of our knowledge, this task has not been explored in natural language processing (NLP) thus far. 2. Identifying whether the subject has a disease/symptom. Besides the work on PAS analysis for nominal predicates, the most relevant work is PAS analysis for verb predicates. However, our task is not as simple as predicting the subject of the verb governing a disease/symptom-related noun. For example, the subject of the verb “beat” is the first person “I” in the sentence “I beat the flu,” but this does not imply that the poster has the flu. At the same time, we can use a variety of expressions for indicating an infection, e.g., “I’m still sick!! This flu is just incredible...,” “I can feel the flu bug in me,” and “I tested positive for the flu.” 3. Omitted subjects. We often come across tweets with omitted subjects, e.g., “Down with the flu feel” and “Thanks the flu for striking in hard this week” even in English tweets. Because the first person is omitted frequently, it is important to predict omitted subjects from the viewpoint of the application (public surveillance). In this paper, we present an approach for identifying the subjects of various types of diseases and symptoms. The contributions of this paper are three-fold. 1. In order to explore a novel and general task setting, we design an annotation guideline for labeling a subject of a disease/symptom in a tweet, and we deliver annotations in an existing corpus for public surveillance. Further, we propose a method for predicting the subject of a disease/symptom by using the annotated corpus. 2. The experimental results show that the task of identifying subjects is independent of the type of diseases/symptom. 
We verify the possibility of transferring supervision data to different targets of diseases and symptoms. In other words, we verify that it is possible to utilize the supervision data for a particular disease/symptom to improve the accuracy of predicting subjects of another disease/symptom. 3. In addition, the experimental results demonstrate the impact of identifying subjects on improving the accuracy of the downstream application (identification of an episode of a disease/symptom). The remainder of this paper is organized as follows. Section 2 describes the corpus used in this study as well as our annotation work for identifying subjects of diseases and symptoms. Section 3.1 presents our method for predicting subjects on the basis of the annotated corpus. Sections 3.2 and 3.3 report the performance of the proposed method. Section 3.4 describes the contributions of this study toward identifying episodes of diseases and symptoms. Section 4 reviews some related studies. Finally, Section 5 summarizes our findings and concludes the paper with a brief discussion on the scope for future work. 2 Corpus 2.1 Target corpus We used a Japanese corpus for public surveillance of diseases and symptoms (Aramaki et al., 2011). The corpus targets seven types of diseases and symptoms: cold, cough, headache, chill, runny nose, fever, and sore throat. Tweets containing keywords for each disease/symptom were collected using the Twitter Search API: for example, tweets about sore throat were collected using the query “(sore OR pain) AND throat”. Further, 1661 Figure 1: Examples of annotations of subject labels. Subject label Definition Example FIRSTPERSON The subject of the disease/symptom is the poster of the tweet. I wish I have fever or something so that I don’t have to go to school. NEARBYPERSON The subject of the disease/symptom is a person whom the poster can directly see or hear. my sister continues to have a high fever... FARAWAYPERSON The subject of the disease/symptom is a person who is at a different location from the poster. @***** does sour stuff give you a headache? NONHUMAN The subject of the disease/symptom is not a person. Alternatively, the sentence does not describe a disease/symptom but a phenomenon or event related to the disease/symptom. My room is so chill. But I like it. NONE The subject of the disease/symptom does not exist. Alternatively, the sentence does not mention an occurrence of a disease/symptom. I hate buyin cold medicine cuz I never know which one to buy Table 1: Definitions of subject labels and example tweets. the corpus consists of 1,000 tweets for each disease/symptom besides cold, and 5,000 tweets for cold. The corpus was collected through whole years 2007-2008. This period was not in the A/H1N1 flu pandemic season. An instance in this corpus consists of a tweet text (in Japanese) and a binary label (episode label, hereafter) indicating whether someone near the poster has the target disease/symptom1. A positive episode indicates an occurrence of the disease/symptom. In this study, we disregarded instances of sore throat in the experiments because most such instances were positive episodes2. 1This label is positive if someone mentioned in the tweet is in the same prefecture as the poster. This is because the corpus was designed to survey the spread of a disease/symptom in every prefecture. 2In Japanese tweets, sore throat or throat pain mostly describes the health condition of the poster. 
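For concreteness, the annotation scheme of Table 1 can be captured in a small data structure. The sketch below is illustrative only: the class names, field names, and the example record are ours and are not part of the released corpus.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class SubjectLabel(Enum):
    # The five subject labels defined in Table 1.
    FIRST_PERSON = "FIRSTPERSON"      # the poster of the tweet
    NEARBY_PERSON = "NEARBYPERSON"    # someone the poster can directly see or hear
    FARAWAY_PERSON = "FARAWAYPERSON"  # someone at a different location from the poster
    NON_HUMAN = "NONHUMAN"            # the subject is not a person (e.g., "My room is so chill")
    NONE = "NONE"                     # no occurrence of the disease/symptom is mentioned

@dataclass
class AnnotatedTweet:
    text: str                   # the tweet text
    keyword: str                # disease/symptom keyword (e.g., "cold", "fever")
    episode: bool               # positive/negative episode label from Aramaki et al. (2011)
    subject: SubjectLabel       # subject label added in this study
    subject_span: Optional[str] = None  # explicit subject text; None when the subject is omitted

example = AnnotatedTweet(
    text="my sister continues to have a high fever...",
    keyword="fever",
    episode=True,
    subject=SubjectLabel.NEARBY_PERSON,
    subject_span="my sister",
)
```

Keeping the explicit subject span optional mirrors the fact, noted above, that subjects are frequently omitted in Japanese tweets.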
2.2 Annotating subjects In this study, we annotated the subjects of diseases and symptoms in the corpus described in Section 2.1. Specifically, we annotated the subjects in 500 tweets for each disease/symptom (except for sore throat). Thus, our corpus includes a total of 3,000 tweets in which the subjects of diseases and symptoms are annotated. Figure 1 shows examples of annotations in this study. Episode labels, tweet texts, and disease/symptom keywords were annotated by Aramaki et al. (2011) in the corpus. We annotated the subject labels of the diseases/symptoms in each tweet and identified those who had the target disease/symptom. The subject labels indicate those who have the corresponding disease/symptom; they are described in detail 1662 Label FIRSTPERSON NEARBYPERSON FARAWAYPERSON NONHUMAN NONE Total # tweets 2,153 129 201 40 401 2,924 # explicit subjects 70 (3.3%) 112 (86.8%) 175 (87.1%) 38 (95.0%) 0 (0.0%) 395 # positive episodes 1,833 99 2 0 16 1,950 # negative episodes 320 30 199 40 385 974 Positive ratio 85.1% 76.7% 1.0% 0.0% 4.0% 66.7% Table 2: Associations between subject labels and positive/negative episodes of diseases and symptoms. herein. In addition to the subject labels, we annotated the text span that indicates a subject. However, the subjects of diseases/symptoms are often omitted in tweet texts. Example 3 in Figure 1 shows a case in which the subject is omitted. The information as to whether the subject is omitted is useful for analyzing the difficulty in predicting the subject of a disease/symptom. Table 1 lists the definitions of the subject labels with tweeted examples. Because it is important to distinguish the primary information (information that is observed and experienced by the poster) from the secondary information (information that is broadcasted by the media) for the application of public surveillance, we introduced five labels: FIRSTPERSON, NEARBYPERSON, FARAWAYPERSON, NONHUMAN, and NONE. FIRSTPERSON is assigned when the subject of the disease/symptom is the poster of the tweet. When annotating this label, we ignore the modality or factuality of the event of acquiring the disease/symptom. For example, the example tweet corresponding to FIRSTPERSON in Table 1 does not state that the poster has a fever but only that the poster has a desire to have a fever. Although such tweets may be inappropriate for identifying a disease/symptom, this study focuses on identifying the possessive relation between a subject and a disease/symptom. The concept underlying this decision is to divide the task of public surveillance into several sub-tasks that are sufficiently generalized for use in other NLP applications. Therefore, the task of analyzing the modality lies beyond of scope of this study (Kitagawa et al., ). We apply the same criterion to the labels NEARBYPERSON, FARAWAYPERSON, and NONHUMAN. NEARBYPERSON is assigned when the subject of the disease/symptom is a person whom the poster can directly see or hear. In the original corpus (Aramaki et al., 2011), a tweet is labeled as positive if the person having a disease/symptom is in the same prefecture as the poster. However, it is extremely difficult for annotators to judge from a tweet whether the person mentioned in the tweet is in the same prefecture as the poster. Nevertheless, we would like to determine from a tweet whether the poster can directly see or hear a patient. For these reasons, we introduced the label NEARBYPERSON in this study. 
FARAWAYPERSON applies to all cases in which the subject is a human, but not classified as FIRSTPERSON or NEARBYPERSON. This category frequently includes tweeted replies, as in the case of the example corresponding to FARAWAYPERSON in Table 1. We assign FARAWAYPERSON to such sentences because we are unsure whether the subject of the symptom is a person whom the poster can physically see or hear. NONHUMAN applies to cases in which the subject is not a human but an object or a concept. For example, a sentence with the phrase “My room is so chill” is annotated with this label. NONE indicates that the sentence does not mention a target disease or symptom even though it includes a keyword for the disease/symptom. In order to investigate the inter-annotator agreement, we sampled 100 tweets of cold at random, and examined the Cohen’s κ statistic by two annotators. The κ statistic is 0.83, indicating a high level agreement (Carletta, 1996). Table 2 reports the distribution of subject labels in the corpus annotated in this study. When the subject of a disease/symptom is FIRSTPERSON, only 3.3% of the tweets have explicit textual clues for the first person3. In other words, when the subject of a disease/symptom is FIRSTPERSON, we rarely find textual clues in tweets. In contrast, there is a greater likelihood of finding explicit clues for NEARBYPERSON, FARAWAYPERSON, and NONHUMAN subjects. Table 2 also lists the probability of positive episodes given a subject label, i.e., the positive ratio. The likelihood of a positive episode 3This ratio may appear to be extremely low, but it is very common to omit first person pronouns in Japanese sentences. 1663 is extremely high when the subject label of a disease/symptom is FIRSTPERSON (85.1%) or NEARBYPERSON (76.7%). In contrast, FARAWAYPERSON, NONHUMAN, and NONE subjects represent negative episodes (less than 5.0%). These facts suggest that identifying subject labels can improve the accuracy of predicting patient labels for diseases and symptoms. 3 Experiment 3.1 Subject classifier We built a classifier to predict a subject label for a disease/symptom mentioned in a sentence by using the corpus described in the previous section. In our experiment, we merged training instances having the label NONHUMAN with those having the label NONE because the number of NONHUMAN instances was small and we did not need to distinguish the label NONHUMAN from the label NONE in the final episode detection task. Thus, the classifier was trained to choose a subject label from among FIRSTPERSON, NEARBYPERSON, FARAWAYPERSON, and NONE. We discarded instances in which multiple diseases or symptoms are mentioned in a tweet as well as those in which multiple subjects are associated with a disease/symptom in a tweet. In addition, we removed text spans corresponding to retweets, replies, and URLs; the existence of these spans was retained for firing features. We trained an L2regularized logistic regression model using Classias 1.14. The following features were used. Bag-of-Words (BoW). Nine words included before and after a disease/symptom keyword. We split a Japanese sentence into a sequence of words using a Japanese morphological analyzer, MeCab (ver.0.98) with IPADic (ver.2.7.0)5. Disease/symptom word (Keyword). The surface form of the disease/symptom keyword (e.g. “cold” and “headache”). 2,3-gram. Character-based bigrams and trigrams before and after the disease/symptom keyword within a window of six letters. URL. A boolean feature indicating whether the tweet includes a URL. 
4http://www.chokkan.org/software/ classias/ 5http://taku910.github.io/mecab/ Feature Micro F1 Macro F1 BoW (baseline) 77.2 42.2 BoW + Keyword 81.9 53.6 BoW + 2,3-gram 79.1 46.1 BoW + URL 77.3 42.7 BoW + RT & reply 80.0 47.1 BoW + NearWord 77.6 46.8 BoW + FarWord 77.3 42.7 BoW + Title word 77.1 42.7 BoW + Tweet length 77.4 43.3 BoW + Is-head 77.6 43.5 All features 84.0 61.8 Table 3: Performance of the subject classifier. RT & reply. Boolean features indicating whether the tweet is a reply or a retweet. Word list for NEARBYPERSON (NearWord). A boolean feature indicating whether the tweet contains a word that is included in the lexicon for NEARBYPERSON. We manually collected words that may refer to a person who is near the poster, e.g., “girlfriend,” “sister,” and “staff.” The NearWord list includes 97 words. Word list for FARAWAYPERSON (FarWord). A boolean feature indicating whether the tweet contains a word that is included in the lexicon for FARAWAYPERSON. Similarly to the NearWord list, we manually collected 50 words (e.g., “infant”) for compiling this list. Title word. A boolean feature indicating whether the tweet contains a title word accompanied by a proper noun. The list of title words includes expressions such as “さん” and “くん” (roughly corresponding to “Ms” and “Mr”) that describe the title of a person. Tweet length. Three types of boolean features that fire when the tweet has less than 11 words, 11 to 30 words, and more than 30 words, respectively. Is-head. A boolean feature indicating whether the word following a disease/symptom keyword is a noun. In Japanese, when the word following a disease/symptom keyword is a noun, the disease/symptom keyword is unlikely to be the head of the noun phrase. 1664 Correct/predicted label FIRSTPERSON NEARBY. FARAWAY. NONE Total FIRSTPERSON 2,084 (−15) 6 (+1) 25 (+21) 38 (−7) 2,153 NEARBYPERSON 80 (−20) 41 (+29) 4 (−5) 4 (−4) 129 FARAWAYPERSON 88 (−49) 8 (+2) 89 (+46) 16 (+1) 201 NONE 174 (−158) 2 (+1) 10 (+4) 255 (+153) 441 Total predictions 2,426 (−237) 57 (+33) 128 (+66) 313 (+137) 2,924 Table 4: Confusion matrix between predicted and correct subject labels. 3.2 Evaluation of the subject classifier Table 3 reports the performance of the subject classifier measured via five-fold cross validation. We used 3,000 tweets corresponding to six types of diseases and symptoms for this experiment. The Bag-of-Words (BoW) feature achieved micro and macro F1 scores of 77.2 and 42.2, respectively. When all the features were used, the performance was boosted, i.e., micro and macro F1 scores of 84.0 and 61.8 were achieved. Features such as disease/symptom keywords, retweet & reply, and the lexicon for NEARBYPERSON were particularly effective in improving the performance. The surface form of the disease/symptom keyword was found to be the most effective feature in this task, the reasons for which are discussed in Section 3.3. A retweet or reply tweet provides evidence that the poster has interacted with another person. Such meta-linguistic features may facilitate semantic and discourse analysis in web texts. However, this feature is mainly limited to tweets. The lexicon for NEARBYPERSON provided an improvement of 4.6 points in terms of the macro F1 score. This is because (i) around 90% of the subjects for NEARBYPERSON were explicitly stated in the tweets and (ii) the vocabulary of people near the poster was limited. Table 4 shows the confusion matrix between the correct labels and the predicted labels. 
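Before examining the confusion matrix in detail, the following sketch shows how a classifier of this kind can be assembled. The original system was trained with Classias on Japanese text segmented by MeCab; here scikit-learn's L2-regularized logistic regression stands in for Classias, and only a handful of the features listed above are reproduced (bag-of-words window, keyword, URL, reply, and NearWord flags). The toy lexicon, feature names, and example tweets are ours and serve only as a sketch of the setup.

```python
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

NEAR_WORDS = {"sister", "girlfriend", "staff"}  # toy stand-in for the 97-word NearWord lexicon

def extract_features(tokens, keyword_index):
    """Build a sparse feature dict around the disease/symptom keyword (simplified)."""
    feats = {}
    keyword = tokens[keyword_index]
    feats["keyword=" + keyword] = 1.0
    # Bag-of-words in a window of nine tokens before and after the keyword.
    lo, hi = max(0, keyword_index - 9), keyword_index + 10
    for tok in tokens[lo:keyword_index] + tokens[keyword_index + 1:hi]:
        feats["bow=" + tok.lower()] = 1.0
    # Boolean clues: URL, reply marker, and NEARBYPERSON lexicon hit.
    feats["has_url"] = float(any(t.startswith("http") for t in tokens))
    feats["is_reply"] = float(tokens[0].startswith("@"))
    feats["near_word"] = float(any(t.lower() in NEAR_WORDS for t in tokens))
    return feats

# Toy training data: (tokenized tweet, keyword position, subject label).
train = [
    ("I caught a cold".split(), 3, "FIRSTPERSON"),
    ("my sister has a fever".split(), 4, "NEARBYPERSON"),
    ("@user does sour stuff give you a headache ?".split(), 7, "FARAWAYPERSON"),
    ("I hate buying cold medicine".split(), 3, "NONE"),
]
X = [extract_features(toks, i) for toks, i, _ in train]
y = [label for _, _, label in train]

clf = make_pipeline(DictVectorizer(), LogisticRegression(penalty="l2", C=1.0, max_iter=1000))
clf.fit(X, y)
print(clf.predict([extract_features("I think I caught a cold".split(), 5)]))
```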
The diagonal elements (in bold face) represent the number of correct predictions. The figures in parentheses denote the number of instances for which the baseline feature set made incorrect predictions, but the full feature set made correct predictions. For example, the classifier predicted NEARBYPERSON subjects 48 times; 34 out of 48 predictions were correct. The full feature set increased the number of correct predictions by 22. From the diagonal elements (in bold face), we can confirm that the number of correct predictions increased significantly from the baseline case, except for FIRSTPERSON. One of the reasons for the improved accuracy of NONE prediction is the imbalanced label ratio of each disease/symptom. NONE accounts for 14% of the entire corpus, but only 5% of the runny nose corpus. On the other hand, NONE accounts for more than 30% of the chill corpus. The disease/symptom keyword feature adjusts the ratio of the subject labels for each disease/symptom, and the accuracy of subject identification is improved. As compared to the baseline case, the number of FIRSTPERSON cases that were predicted as FARAWAYPERSON increased. Such errors may be attributed to the reply feature. According to our annotation scheme, FARAWAYPERSON contains many reply tweets. Because the reply & retweet features make the second-largest contribution in our experiment, the subject classifier tends to output FARAWAYPERSON if the tweet is a reply. Table 5 summarizes the subject classification results comparing the case in which the subject of a disease/symptom exists in the tweet with that in which the subject does not exist. The prediction of FIRSTPERSON is not affected by the presence of the subject because FIRSTPERSON subjects are often omitted (especially in Japanese tweets). The prediction of NEARBYPERSON and FARAWAYPERSON is difficult if the subject is not stated explicitly. In contrast, it is easy to correctly predict NONE even though the subject is not expressed explicitly. This is because it is not easy to capture a variety of human-related subjects using Bag-of-Words, N-gram, or other simple features used in this experiment. 3.3 Dependency on diseases/symptoms The experiments described in Section 3.2 use training instances for all types of diseases and symptoms. However, each disease/symptom may have a set of special expressions for describing the state of an episode. For example, even though “catch a cold” is a common expression, we cannot 1665 Subject FIRSTPERSON NEARBYPERSON FARAWAYPERSON NONE # Explicit 66/69 (95.7%) 40/112 (35.7%) 79/174 (45.4%) 1/26 (3.8%) # Omitted 2,018/2,084 (96.8%) 1/17 (5.9%) 10/27 (37.0%) 254/415 (61.2%) # Total 2,084/2,153 (96.8%) 41/129 (31.8%) 89/201 (44.3%) 255/441 (57.8%) Table 5: Subject classification results comparing explicit subjects with omitted subjects. Figure 2: F1 scores for predicting subjects of cold with different types and sizes of training data. say “catch a fever” by combining the verb “catch” and the disease “fever.” The corpus developed in Section 2.2 can be considered as the supervision data for weighting linguistic patterns that connect diseases/symptoms with their subjects. This viewpoint raises another question: how strongly does the subject classifier depend on specific diseases and symptoms? In order to answer this question, we compare the performance of recognizing subjects of cold when using the training instances for all types of diseases and symptoms with that when using only the training instances for the target disease/symptom. 
Figure 2 shows the macro F1 scores with all training instances (dotted line) and with only cold training instances (solid line)6. In this case, training with cold instances is naturally more efficient than training with other types of diseases/symptoms. When trained with 400 instances only for cold, the classifier achieved an F1 score of 45.2. Moreover, we confirmed that adding training instances for other types of diseases/symptoms improved the F1 score: the max6For the solid line, we used 500 instances of “cold” as a test set, and we plotted the learning curve by increasing the number of training instances for other diseases/symptoms. For the dotted line, we fixed 100 instances for a test set, and we plotted the learning curve by increasing the number of training instances (100, 200, 300, and 400). Figure 3: Overall structure of the system. imum F1 score was 54.6 with 2,900 instances. These results indicate the possibility of building a subject classifier that is independent of specific diseases/symptoms but applicable to a variety of diseases/symptoms. We observed a similar tendency for other types of diseases/symptoms. 3.4 Contributions to the episode classifier The ultimate objective of this study is to detect outbreaks of epidemics by recognizing diseases and symptoms. In order to demonstrate the contributions of this study, we built an episode classifier that judges whether the poster or a person close to the poster suffers from a target disease/symptom. Figure 3 shows the overall structure of the system. Given a tweet, the system predicts the subject label for a disease/symptom, and integrates the predicted subject label as a feature for the episode classifier. In addition to the features used in Aramaki et al. (2011), we included binary features, each of which corresponds to a subject label predicted by the proposed method. We trained an L2regularized logistic regression model using Classias 1.1. Table 6 summarizes the performance of the episode classifier with different settings: without subject labels (baseline), with predicted subject la1666 Setting Cold Cough Headache Chill Runny nose Fever Macro F1 Baseline (BL) 84.4 88.5 90.8 75.9 89.2 78.1 84.5 BL + predicted subjects 85.0 88.3 90.7 81.4 89.4 80.2 85.8 BL + gold-standard subjects 87.7 92.6 93.5 88.5 91.4 88.6 90.4 Table 6: Performance of the episode classifier. bels , and with gold-standard subject labels. We measured the F1 scores via five-fold cross validation7. Further, we confirmed the contribution of subject label prediction, which achieved an improvement of 1.3 points over the baseline method (85.8 vs. 84.5). When using the gold-standard subject labels, the episode classifier achieved an improvement of 5.9 points. These results highlight the importance of recognizing a subject who has a disease/symptom using the episode classifier. Considering the F1 score for each disease/symptom, we observed the largest improvement for chill. This is because the Japanese word for “chill” has another meaning a cold air mass. When the word “chill” stands for a cold air mass in a tweet, the subject for “chill” is NONE. Therefore, the episode classifier can disambiguate the meaning of “chill’ on the basis of the subject labels. Similarly, the subject labels improved the performance for “fever”. In contrast, the subject labels did not improve the performance for headache and runny nose considerably. This is because the subjects for these symptoms are mostly FIRSTPERSON, as we seldom mention the symptoms of another person in such cases. 
In other words, the episode classifier can predict a positive label for these symptoms without knowing the subjects of these symptoms. 4 Related Work 4.1 Twitter and NLP NLP researchers have addressed two major directions for Twitter: adapting existing NLP technologies to noisy texts and extracting useful knowledge from Twitter. The former includes improving the accuracy of part-of-speech tagging (Gimpel et al., 2011) and named entity recognition (Plank et al., 2014), as well as normalizing ill-formed words into canonical forms (Han and Baldwin, 2011; Chrupała, 2014). Even though we did not incor7For the “predicted” setting, first, we predicted the subject labels in a similar manner to five-fold cross validation, and we used the predicted labels as features for the episode classifier. porate the findings of these studies, they could be beneficial to our work in the future. The latter has led to the development of several interesting applications besides health surveillance. These include prediction of future revenue (Asur and Huberman, 2010) and stock market trends (Si et al., 2013), mining of public opinion (O’Connor et al., 2010), event extraction and summarization (Sakaki et al., 2010; Thelwall et al., 2011; Marchetti-Bowick and Chambers, 2012; Shen et al., 2013; Li et al., 2014a), user profiling (Bergsma et al., 2013; Han et al., 2013; Li et al., 2014b; Zhou et al., 2014), disaster management (Varga et al., 2013), and extraction of common-sense knowledge (Williams and Katz, 2012). Our work can directly contribute to these applications, e.g., sentiment analysis, user profiling, event extraction, and disaster management. 4.2 Semantic analysis for nouns Our work can be considered as a semantic analysis that identifies an argument (subject) for a disease/symptom-related noun. NomBank (Meyers et al., 2004) provides annotations of noun arguments in a similar manner to PropBank (Palmer et al., 2005), which provides annotations of verbs. In NomBank, nominal predicates and their arguments are identified: for example, ARG0 (typically, subject or agent) is “customer” and ARG1 (typically, objects, patients, themes) is “issue” for the nominal predicate “complaints” in the sentence “There have been no customer complaints about that issue.” Gerber and Chai (2010) improved the coverage of NomBank by handling implicit arguments. Some studies have addressed the task of identifying implicit and omitted arguments for nominal predicates in Japanese (Komachi et al., 2007; Sasano et al., 2008). Our work shares a similar goal with the abovementioned studies, i.e., identifying an implicit ARG0 for a disease and symptom. However, these studies do not regard a disease/symptom as a nominal predicate because they consider verb nominalizations as nominal predicates. In addition, 1667 they use a corpus that consists of newswire text, the writing style and word usage of which differ considerably from those of tweets. For these reasons, we proposed a novel task setting for identifying subjects of diseases and symptoms, and we built an annotated corpus for developing the subject classifier and analyzing the challenges of this task. 5 Conclusion In this paper, we presented a novel approach to the identification of subjects of various types of diseases and symptoms. First, we constructed an annotated corpus based on an existing corpus for public surveillance. Then, we trained a classifier for predicting the subject of a disease/symptom. 
The results of our experiments showed that the task of identifying the subjects is independent of the type of disease/symptom. In addition, the results demonstrated the contributions of our work toward identifying an episode of a disease/symptom from a tweet. In the future, we plan to consider a greater variety of diseases and symptoms in order to develop applications for public health, e.g., monitoring the mental condition of individuals. Thus, we can not only improve the accuracy of subject identification but also enhance the generality of this task. Acknowledgments This study was partly supported by Japan Science and Technology Agency (JST). We are grateful to the anonymous referees for their constructive reviews. We are also grateful to Takayuki Sato and Yasunobu Asakura for their annotation efforts. This study was inspired by Project Next NLP8, a workshop for error analysis on various NLP tasks. We appreciate Takenobu Tokunaga, Satoshi Sekine, and Kentaro Inui for their helpful comments. References Eiji Aramaki, Sachiko Maskawa, and Mizuki Morita. 2011. Twitter catches the flu: Detecting influenza epidemics using twitter. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1568–1576. Sitaram Asur and Bernardo A. Huberman. 2010. Predicting the future with social media. In Proceedings 8https://sites.google.com/site/ projectnextnlp/english-page of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology - Volume 01, WI-IAT ’10, pages 492–499, Washington, DC, USA. IEEE Computer Society. Shane Bergsma, Mark Dredze, Benjamin Van Durme, Theresa Wilson, and David Yarowsky. 2013. Broadly improving user classification via communication-based name and location clustering on Twitter. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1010–1019. David Broniatowski, Michael J. Paul, and Mark Dredze. 2013. National and local influenza surveillance through Twitter: An analysis of the 2012-2013 influenza epidemic. PLoS ONE, 8(12):e83672. Jean Carletta. 1996. Assessing agreement on classification tasks: the kappa statistic. Computational linguistics, 22(2):249–254. Herman Anthony Carneiro and Eleftherios Mylonakis. 2009. Google trends: a web-based tool for real-time surveillance of disease outbreaks. Clinical Infectious Diseases, 49(10):1557–1564. Grzegorz Chrupała. 2014. Normalizing tweets with edit scripts and recurrent neural embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 680–686. Nigel Collier. 2012. Uncovering text mining: a survey of current work on web-based epidemic intelligence. Global Public Health: An International Journal for Research, Policy and Practice, 7(7):731–749. Aron Culotta. 2010. Towards detecting influenza epidemics by analyzing Twitter messages. In Proceedings of the Workshop on Social Media Analytics (SOMA), pages 115–122. Mark Dredze, Michael J. Paul, Shane Bergsma, and Hieu Tran. 2013. Carmen: A Twitter geolocation system with applications to public health. In Proceedings of the AAAI Workshop on Expanding the Boundaries of Health Informatics Using AI (HIAI), pages 20–24. Matthew Gerber and Joyce Y. Chai. 2010. Beyond NomBank: A study of implicit arguments for nominal predicates. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1583–1592. 
Francesco Gesualdo, Giovanni Stilo, Eleonora Agricola, Michaela V. Gonfiantini, Elisabetta Pandolfi, Paola Velardi, and Alberto E. Tozzi. 2013. Influenza-like illness surveillance on Twitter through automated learning of naïve language. PLoS One, 8(12):e82489. 1668 Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for Twitter: Annotation, features, and experiments. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 42–47. Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 368–378. Bo Han, Paul Cook, and Timothy Baldwin. 2013. A stacking-based approach to Twitter user geolocation prediction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 7–12. Yoshiaki Kitagawa, Mamoru Komachi, Eiji Aramaki, Naoaki Okazaki, and Hiroshi Ishikawa. Disease event detection based on deep modality analysis. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP) 2015 Student Research Workshop. Mamoru Komachi, Ryu Iida, Kentaro Inui, and Yuji Matsumoto. 2007. Learning based argument structure analysis of event-nouns in Japanese. In Proceedings of the Conference of the Pacific Association for Computational Linguistics (PACLING), pages 120–128. Alex Lamb, Michael J. Paul, and Mark Dredze. 2013. Separating fact from fear: Tracking flu infections on Twitter. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 789–795. Vasileios Lampos and Nello Cristianini. 2010. Tracking the flu pandemic by monitoring the social web. In 2nd IAPR Workshop on Cognitive Information Processing (CIP 2010), pages 411–416. David Lazer, Ryan Kennedy, Gary King, and Alessandro Vespignani. 2014. The parable of Google flu: Traps in big data analysis. Science, 343(6176):1203–1205. Jiwei Li, Alan Ritter, Claire Cardie, and Eduard Hovy. 2014a. Major life event extraction from Twitter based on congratulations/condolences speech acts. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1997–2007. Jiwei Li, Alan Ritter, and Eduard Hovy. 2014b. Weakly supervised user profile extraction from Twitter. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 165–174. Micol Marchetti-Bowick and Nathanael Chambers. 2012. Learning for microblogs with distant supervision: Political forecasting with Twitter. In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, pages 603–612. Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. The NomBank Project: An interim report. In Proceedings of the NAACL/HLT Workshop on Frontiers in Corpus Annotation, pages 24–31. Brendan O’Connor, Ramnath Balasubramanyan, Bryan R. Routledge, , and Noah A. Smith. 2010. 
From tweets to polls: Linking text sentiment to public opinion time series. In Proceedings of the Fourth International AAAI Conference on Weblogs and Social Media (ICWSM), pages 122–129. Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71–106. Michael J. Paul and Mark Dredze. 2011. You are what you tweet: Analyzing Twitter for public health. In Proceedings of the Fifth International AAAI Conference on Weblogs and Social Media (ICWSM), pages 265–272. Barbara Plank, Dirk Hovy, Ryan McDonald, and Anders Søgaard. 2014. Adapting taggers to Twitter with not-so-distant supervision. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1783–1792. Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes Twitter users: real-time event detection by social sensors. In Proceedings of the 19th international conference on World Wide Web (WWW), pages 851–860. Ryohei Sasano, Daisuke Kawahara, and Sadao Kurohashi. 2004. Automatic construction of nominal case frames and its application to indirect anaphora resolution. In Proceedings of the 20th international conference on Computational Linguistics, pages 1201–1207. Ryohei Sasano, Daisuke Kawahara, and Sadao Kurohashi. 2008. A fully-lexicalized probabilistic model for Japanese zero anaphora resolution. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 769–776. Chao Shen, Fei Liu, Fuliang Weng, and Tao Li. 2013. A participant-based approach for event summarization using Twitter streams. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1152–1162. 1669 Jianfeng Si, Arjun Mukherjee, Bing Liu, Qing Li, Huayi Li, and Xiaotie Deng. 2013. Exploiting topic based Twitter sentiment for stock prediction. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 24–29. Alessio Signorini, Alberto Maria Segre, and Philip M. Polgreen. 2011. The use of Twitter to track levels of disease activity and public concern in the U.S. during the influenza A H1N1 pandemic. PLoS ONE, 6(5):e19467. Mark A. Stoové and Alisa E. Pedrana. 2014. Making the most of a brave new world: Opportunities and considerations for using Twitter as a public health monitoring tool. Preventive Medicine, 63:109–111. Mike Thelwall, Kevan Buckley, and Georgios Paltoglou. 2011. Sentiment in Twitter events. Journal of the American Society for Information Science and Technology, 62(2):406–418. István Varga, Motoki Sano, Kentaro Torisawa, Chikara Hashimoto, Kiyonori Ohtake, Takao Kawai, JongHoon Oh, and Stijn De Saeger. 2013. Aid is out there: Looking for help from tweets during a large scale disaster. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1619– 1629. Paola Velardi, Giovanni Stilo, Alberto E. Tozzi, and Francesco Gesualdo. 2014. Twitter mining for finegrained syndromic surveillance. Artificial Intelligence in Medicine, 61(3):153–163. Jennifer Williams and Graham Katz. 2012. Extracting and modeling durations for habits and events from Twitter. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 223–227. Deyu Zhou, Liangyu Chen, and Yulan He. 2014. 
A simple Bayesian modelling approach to event extraction from Twitter. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 700–705.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1671–1680, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Weakly Supervised Role Identification in Teamwork Interactions Diyi Yang School of Computer Science Carnegie Mellon University Pittsburgh, PA, 15213, USA [email protected] Miaomiao Wen School of Computer Science Carnegie Mellon University Pittsburgh, PA, 15213, USA [email protected] Carolyn Penstein Ros´e School of Computer Science Carnegie Mellon University Pittsburgh, PA, 15213, USA [email protected] Abstract In this paper, we model conversational roles in terms of distributions of turn level behaviors, including conversation acts and stylistic markers, as they occur over the whole interaction. This work presents a lightly supervised approach to inducing role definitions over sets of contributions within an extended interaction, where the supervision comes in the form of an outcome measure from the interaction. The identified role definitions enable a mapping from behavior profiles of each participant in an interaction to limited sized feature vectors that can be used effectively to predict the teamwork outcome. An empirical evaluation applied to two Massive Open Online Course (MOOCs) datasets demonstrates that this approach yields superior performance in learning representations for predicting the teamwork outcome over several baselines. 1 Introduction In language technologies research seeking to model conversational interactions, modeling approaches have aimed to identify conversation acts (Paul, 2012; Wallace et al., 2013; Bhatia et al., 2014) on a per turn basis, or to identify stances (Germesin and Wilson, 2009; Mukherjee et al., 2013; Piergallini et al., 2014; Hasan and Ng, 2014) that characterize the nature of a speaker’s orientation within an interaction over several turns. What neither of these two perspectives quite offer is a notion of a conversational role. And yet, conversational role is a concept with great utility in current real world applications where language technologies may be applied. Important teamwork is achieved through collaboration where discussion is an important medium for accomplishing work. For example, distributed work teams are becoming increasingly the norm in the business world where creating innovative products in the networked world is a common practice. This work requires the effective exchange of expertise and ideas. Open source and open collaboration organizations have successfully aggregated the efforts of millions of volunteers to produce complex artifacts such as GNU/Linux and Wikipedia. Discussion towards decision making about how to address problems that arise or how to extend work benefit from effective conversational interactions. With a growing interest in social learning in large online platforms such as Massive Open Online Courses (MOOCs), students form virtual study groups and teams to complete a course project, and thus may need to coordinate and accomplish the work through discussion. In all such environments, discussions serve a useful purpose, and thus the effectiveness of the interaction can be measured in terms of the quality of the resulting product. We present a modeling approach that leverages the concept of latent conversational roles as an intermediary between observed discussions and a measure of interaction success. 
While a stance identifies speakers in terms of their positioning with respect to one another, roles associate speakers with rights and responsibilities, associated with common practices exhibited by performers of that role within an interaction, towards some specific interaction outcome. That outcome may be achieved through strategies characterized in terms of conversation acts or language with particular stylistic characteristics. However, individual acts by themselves lack the power to achieve a complex outcome. We argue that roles make up for this decontextualized view of a conversational contribution by identifying distributions of conversation acts and stylistic features as behavior profiles indicative of conversational roles. These 1671 profiles have more explanatory power to identify strategies that lead to successful outcomes. In the remainder of the paper we first review related work that lays the foundation for our approach. Then we describe a series of role identification models. Experimental results are analyzed quantitatively and qualitatively in Section 4, followed by conclusions and future work. 2 Related Work The concept of social role has long been used in social science fields to describe the intersection of behavioral, symbolic, and structural attributes that emerge regularly in particular contexts. Theory on coordination in groups and organizations emphasizes role differentiation, division of labor and formal and informal management (Kittur and Kraut, 2010). However, identification of roles as such has not had a corresponding strong emphasis in the language technologies community, although there has been work on related notions. For example, there has been much previous work modeling disagreement and debate framed as stance classification (Thomas et al., 2006; Walker et al., 2012). Another similar line of work studies the identification of personas (Bamman et al., 2013; Bamman et al., 2014) in the context of a social network, e.g. celebrity, newbie, lurker, flamer, troll and ranter, etc, which evolve through user interaction (Forestier et al., 2012). What is similar between stances and personas on the one hand and roles on the other is that the unit of analysis is the person. On the other hand, they are distinct in that stances (e.g., liberal) and personas (e.g., lurker) are not typically defined in terms of what they are meant to accomplish, although they may be associated with kinds of things they do. Teamwork roles are defined in terms of what the role holder is meant to accomplish. The notion of a natural outcome associated with a role suggests a modeling approach utilizing the outcome as light supervision towards identification of the latent roles. However, representations of other notions such as stances or strategies can similarly be used to predict outcomes. Cadilhac et al. maps strategies based on verbal contributions of participants in a win-lose game into a prediction of exactly which players, if any, trade with each other (Cadilhac et al., 2013). Hu et al. (Hu et al., 2009) predict the outcome of featured article nominations based on user activeness, discussion consensus and user co-review relations. In other work, the authors of (Somasundaran and Wiebe, 2009) adopt manually annotated characters and leaders to predict which participants will achieve success in online debates. The difference is the interpretation of the latent constructs. 
The latent construct of a role, such as team leader, is defined in terms of a distribution of characteristics that describe how that role should ideally be carried out. However, in the case of stances, the latent constructs are learned in order to distinguish one stance from another or in order to predict who will win. This approach will not necessarily offer insight into what marks the most staunch proponents of a stance, but instead distinguish those proponents of a stance who are persuasive from those who are not. Roles need not only be identified with the substance of the text uttered by role holders. Previous work discovers roles in social networks based on the network structure (Hu and Liu, 2012; Zhao et al., 2013). Examples include such things as mixed membership stochastic blockmodels (MMSB) (Airoldi et al., 2008), similar unsupervised matrix factorization methods (Hu and Liu, 2012), or semi-supervised role inference models (Zhao et al., 2013). However, these approaches do not standardly utilize an outcome as supervision to guide the clustering. Many open questions exist about what team roles and in what balance would make the ideal group composition (Neuman et al., 1999), and how those findings interact with other contextual factors (Senior, 1997; Meredith Belbin, 2011). Thus, a modeling approach that can be applied to new contexts in order to identify roles that are particularly valuable given the context would potentially have high practical value. 3 Role Identification Models The context of this work is team based MOOCs using the NovoEd platform. In this context, we examine the interaction between team members as they work together to achieve instructional goals in their project work. Our modeling goal is to identify behavior profiles that describe the emergent roles that team members take up in order to work towards a successful group grade for their team project. Identification of effective role based behavior profiles would enable work towards supporting effective team formation in subsequent 1672 work. This approach would be similar to prior work where constraints that describe successful teams were used to group participants into teams in which each member’s expertise is modeled so that an appropriate mixture of expertise can be achieved in the assignment (Anagnostopoulos et al., 2010). In this section, we begin with an introduction of some basic notations. Then we present an iterative model, which involves two stages: teamwork quality prediction and student role matching. Furthermore, we generalize this model to a constrained version which provides more interpretable role assignments. In the end, we describe how to construct student behavior representations from their teamwork collaboration process. 3.1 Notation Suppose we have C teams where students collaborate to finish a course project together. The number of students in the j-th team is denoted as Nj, (1 ≤j ≤Nj). There are K roles across C teams that we want to identify, where 1 ≤K ≤ Nj, ∀j ∈[1, C]. That is, the number of roles is smaller than or equal to the number of students in a team, which means that each role should have one student assigned to it, but not every student needs to be assigned to a role. Each role is associated with a weight vector Wk ∈RD to be learned, 1 ≤k ≤K and D is the number of dimensions. Each student i in a team j is associated with a behavior vector Bj,i ∈RD. The measurement of teamwork quality is denoted as Qj for team j, and ˆQj is the predicted quality. 
Here, ˆQj is determined by the inner product of the behavior vectors of students who are assigned to different roles and the corresponding weight vectors. Teamwork Role Identification Our goal is to find a proper teamwork role assignment that positively contributes to the collaboration outcome as much as possible. 3.2 Role Identification Here we describe our role identification model. Our role identification process is iterative and involves two stages. The first stage adjusts the weight vectors to predict the teamwork quality, given a fixed role assignment that assumes students are well matched to roles; the second stage iterates the possible assignments and finds a matching to maximize our objective measure. The S1 S2 SN R1 R2 RK … … Weight(i,j) = Wk TBj,pj,k maximum weighted matching candidate edges Si i-th student in j-th team Rk the k-th role Weighted Bipartite Graph for j-th team Figure 1: Weighted Bipartite Graph for a Team two stages run iteratively until both role assignment and teamwork quality prediction converge. Teamwork Quality Prediction: Given the identified role assignment, i.e. we know who is assigned to which roles in a team, the focus is to accurately predict the teamwork quality under this role assignment. pj,k refers to the student who is assigned to role k in team j. We minimize the following objective function to update the role weight vector W: min W 1 2 C X j=1 (Qj− K X k=1 WkT ·Bj,pj,k)+λ·∥W∥2 (1) Here, λ is the regularization parameter; large λ leads to higher complexity penalization. To give the optimal solution to Equation 1, which is a classical ridge regression task (Hoerl and Kennard, 2000), we can easily compute the optimal solution by its closed form representation, as shown in the Algorithm 1. Matching Members to Roles: Once the weight vector W is updated, we iterate over all the possible assignments and find the best role assignment, where the goal is to maximize the predicted teamwork quality since we want our assignment of students and roles to be associated with improvement in the quality of teamwork. The complexity of brute-force enumeration of all possible role assignments is exponential. To avoid such an expensive computational cost, we design a weighted bipartite graph and apply a maximum weighted matching algorithm (Ravindra et al., 1993) to find the best matching under the objective of maximizing PC j=1 ˆQj. Because this objective is a summation, we can further separate it into C iso1673 Algorithm 1: Role Identification 1 Heuristicly initialize the role assignment pj,k 2 while assignments have not converged do // Teamwork Quality Prediction 3 X ←a C × (K · D) matrix 4 for j = 1 to C do 5 Xj,∗←(Bj,pj,1, Bj,pj,2, . . . , Bj,pj,K) // optimal solution to Eq. 1 6 (W1, . . . , WC) ←(XT X + λI)−1XT Q // Student and Role Matching // maximize P j ˆQj 7 for j = 1 to C do 8 (pj,∗) ←maximum weighted bipartite matching on Figure 1 lated components for C teams by maximizing each ˆQj. For each team, a weighted bipartite graph is created as specified in Figure 1. By applying the maximum weighted matching algorithm on this graph, we can obtain the best role assignment for each team. The two stage role identification model is solved in detail in Algorithm 1. 3.3 Role Identification with Constraints The above role identification model puts no constraints on the roles that we want to identify in teamwork. This might result in more effort to explain how different roles collaborate to produce the teamwork success. 
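As a concrete reference point before the constrained variant is introduced, the sketch below implements one run of Algorithm 1 in NumPy/SciPy: the ridge-regression closed form of Equation 1 alternated with maximum weighted bipartite matching, here solved with `scipy.optimize.linear_sum_assignment` rather than the algorithm of Ravindra et al. (1993). Array shapes, variable names, and the toy data are ours and only approximate the original implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def quality_prediction_step(B, assign, Q, lam):
    """Closed-form ridge regression (Eq. 1): returns role weights W of shape (K, D).
    B[j] is the (N_j, D) behavior matrix of team j; assign[j] lists the member index
    holding each of the K roles; Q is the (C,) vector of team project scores."""
    C, K, D = len(B), len(assign[0]), B[0].shape[1]
    X = np.stack([np.concatenate([B[j][assign[j][k]] for k in range(K)]) for j in range(C)])
    W_flat = np.linalg.solve(X.T @ X + lam * np.eye(K * D), X.T @ Q)
    return W_flat.reshape(K, D)

def matching_step(B, W):
    """Maximum weighted bipartite matching of members to roles for every team."""
    assign = []
    for Bj in B:
        scores = Bj @ W.T                            # (N_j, K) edge weights W_k^T B_{j,i}
        rows, cols = linear_sum_assignment(-scores)  # maximize total predicted quality
        role_to_member = [rows[list(cols).index(k)] for k in range(W.shape[0])]
        assign.append(role_to_member)
    return assign

# Tiny example with C=2 teams, D=3 behavior dimensions, K=2 roles.
rng = np.random.default_rng(0)
B = [rng.random((4, 3)), rng.random((3, 3))]
Q = np.array([32.0, 25.0])
assign = [[0, 1], [0, 1]]                 # heuristic initialization
for _ in range(20):                       # alternate the two stages until convergence
    W = quality_prediction_step(B, assign, Q, lam=0.1)
    new_assign = matching_step(B, W)
    if new_assign == assign:              # both role assignment and weights have converged
        break
    assign = new_assign
```

As noted above, leaving the role set unconstrained in this way can make the induced roles harder to interpret.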
Therefore, we introduce a constrained role identification model, which is able to integrate external constraints on roles. For example, we can require our extracted role set to contain a role that makes a positive contribution to the project success and a role that contributes relatively negatively, instead of extracting several generic roles. To address such constraints, in the stage of teamwork quality prediction, we reformulate the Equation 2 as follows: L = 1 2 C X j=1 (Qj − K X k=1 WkT · Bj,pj,k) + λ∥W∥2 −µ+ X k∈S+ D X d=1 log(Wkd) −µ− X k∈S− D X d=1 log(−Wkd) (2) Algorithm 2: Identification with Constraints 1 Heuristicly initialize the role assignment pj,k 2 while assignments have not converged do // Teamwork Quality Prediction 3 X ←a C × (K · D) matrix 4 for j = 1 to C do 5 Xj ←(Bj,pj,1, Bj,pj,2, . . . , Bj,pj,K) // gradient descent solution to Eq. 2 6 µ+, µ−←large enough values 7 while µ+, µ−> ϵ do 8 while not converge do 9 for k = 1 to K do 10 Wk ←Wk −η · ∂L ∂Wk 11 µ+ ←θ · µ+ 12 µ−←θ · µ− // Students and Roles Matching // maximize P j ˆQj 13 for j = 1 to C do 14 (pj,∗) ←maximum weighted bipartite matching on Figure 1 The external constraints are handled by the log barrier terms (Ahuja et al., 1993). Here, µ+ and µ−are positive parameters used to penalize the violation of role constraints. S+ is the set of roles that we want to assign students who contribute positively to the group outcome (i.e. above average level), and S−contains the roles that we want to capture students who contribute negatively to the group outcome (i.e. below average level). The solving of Equation 2 cannot directly apply the previous ridge regression algorithm, thus we use the Interior Point Method (Potra and Wright, 2000) to solve it. The detailed procedure is illustrated in Algorithm 2, where the θ is a constant to control the shrinkage and η is the learning rate. 3.4 Behavior Construction One essential component in our teamwork role identification models is the student behavior representation. To some extent, a proper behavior representation is essential for facilitating the interpretation of identified roles. We construct the representation of student behavior from the following feature types: Team Member Behaviors: How a team functions can be reflected in their team communication messages. To understand how students collaborate 1674 Type Behavior Definition Example Messages Team Building Invite or accept users Lauren, We would love to have you. to join the group Jill and I are both ESL specialists in Boston. Task Initiate a task or assign Housekeeping Task 3 is optional but below are Management subtask to a team member the questions I summarize and submit for our team. Collaboration Collaborate with teammates, I figured out how to use the Google Docs. provide help or feedback Let’s use it to share our lesson plans. Table 1: Three Different Types of Team Member Behaviors to contribute to teamwork success, we identified three main team member behaviors based on messages sent between team members as shown in Table 1. These annotations, which came from prior qualitative work analysing discussion contributions in the same dataset (Wen et al., 2015), are used to define component behaviors in this work. We design four variables to characterize the above collaboration behaviors: 1. Collaboration: the number of Collaboration messages sent by this team member. 2. Task Management: the number of Task Management messages sent by this team member. 3. 
Team Building: the number of Team Building messages sent by this team member. 4. Other Strategies: the number of messages that do not belong to the listed behavior categories. Communication Languages: Teams that work successfully typically exchange more knowledge and establish good social relations. To capture such evidence that is indicated in the language choice and linguistic styles of each team member, we design the following features: 5. Personal Pronouns: the proportion of first person and second person pronouns. 6. Negation: counts of negation words. 7. Question Words: counts of question related words in the posts, e.g. why, what, question, problem, how, answer, etc. 8. Discrepancy: number of occurrences of words, such as should, would, could, etc as defined in LIWC (Tausczik and Pennebaker, 2010). 9. Social Process: number of words that denote social processes and suggest human interaction, e.g. talking, sharing, etc. 10. Cognitive Process: number of occurrences of words that reflect thinking and reasoning, e.g. cause, because, thus, etc. 11-14. Polarity: four variables that measure the portion of Positive, Negative, Neutral, Both polarity words (Wilson et al., 2005) in the posts. 15-16. Subjectivity: two count variables of occurrences of Strong Subjectivity words and Weak Subjectivity words. Activities: We also introduce several variables to measure the activeness level of team members. 17-18. Messages: two variables that measure the total number of messages sent, and the number of tokens contained in the messages. 19-20. Videos: the number of videos a student has watched and total duration of watched videos. 21. Login Times: times that a student logins to the course. 4 Experiments In this section, we begin with the dataset description, and then we compare our models with several competitive baselines by performing 10-fold cross validation on two MOOCs, followed by a series of quantitative and qualitative analyses. 4.1 Dataset Our datasets come from a MOOC provider NovoEd, and consist of two MOOC courses. Both courses are teacher professional development courses about Constructive Classroom Conversations; one is in elementary education and another is about secondary education. Students in a NovoEd MOOC have to initiate or join a team in the beginning of the course. A NovoEd team homepage consists of blog posts, comments and other content shared within the group. The performance measure we use is the final team project score, which is in the range of 0 to 40. There are 57 teams (163 students) who survived until the end in the Elementary education course, and 77 teams (262 students) who survived for the Secondary course. The surviving teams are the ones in which none of the team members dropped out of the course, and who finished all the course requirements. For the purpose of varying teamwork roles K, we only keep the teams with 1675 at least 3 members. Self-identified team leader are labeled in the dataset. 4.2 Baselines We propose several baselines to extract possible roles and predict the teamwork quality for comparison with our models. Preprocessing is identical for baselines as for our approach. Top K Worst/Best: The worst performing student is often the bottleneck in a team, while the success of a team project largely depends on the outstanding students. Therefore, we use the top K worst/best performing students as our identified K roles. Their behavior representation are then used to predict the teamwork quality. The performing scores are only accessible after the course. 
K-Means Clustering: Students who are assigned to the same roles tend to have similar activity profiles. To capture the similarities of student behavior, we adopt a clustering method to group students in a team into K clusters, and then assign students to roles based on their distances to the centroid of clusters. Prediction is then performed on the basis of those corresponding behavior vectors. Here, we use K-Means method for clustering. That is, each cluster is a latent representation of a role and each student is assigned to its closest cluster (role). Leader: Leaders play important roles for the smooth functioning of teams, and thus might have substantial predictive power of team success. We input our role identification model with only the identified leader’s behavior representation and conduct our role identification algorithm as illustrated in Algorithm 1. Each team in our courses have a predefined leader. Average: The average representation of all team members is a good indication of team ability level and thus teamwork success. Here, we average all team members’ behavior feature vectors and use that to predict the teamwork quality. 4.3 Teamwork Quality Prediction Results The purpose of our role identification is to find a role assignment that minimizes the prediction error, thus we measure the performance of our models using RMSE (Rooted Mean Square Error). 10-fold Cross Validation is employed to test the overall performance. Table 2 and Table 3 presents the results of our proposed models and baselines on our two courses. Our role identification model shown in Algorithm 1, is denoted as RI. θ is set as 0.9 and we vary the role number K from 1 to 3 in order to assess the added value of each additional role over the first one. 4.3.1 Who Matters Most In a Team If we set the number of roles K as 1, what will the role identification pick as the most important person to the teamwork outcome? From Table 2 and 3, we find that, RI performs better than Leader, and either Top K Best gives a good RMSE in one course and Top K Worst gives a good RMSE in the other course. This indicates that, the predefined leader is not always functioning well in facilitating the teamwork, thus we need a more fair mechanism to select the proper leading role. Besides, Top K worst has quite good performance on the Elementary course, which reflects that the success of a teamwork is to some extent dependent on the worst performing student in that team. The best performing student matters for the teamwork outcome on the Secondary course. 4.3.2 Multi-Role Collaboration From Table 2 and 3, in the setting of K=3, RI achieved better results compared to Top K Best, Top K Worst and K-means methods. One explanation is that our RI model not only considers individual student’s behaviors, but also takes into account the collaboration patterns through all teamwork. Besides, RI achieves better performance compared to our baselines as K becomes larger. We also noticed that Top K Best gives a quite good approximation to the teamwork quality on both courses. However, such performing scores that are used to rank students are not accessible until the course ends, and have high correlation with team score. Thus an advantage of our RI model is that it does not make use of that information. Compared with all other results, our RI has a good generalization ability, and achieves both a smallest RMSE of around 10 across both MOOCs. 
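For reference, the K-Means baseline and the RMSE measure used in these comparisons can be sketched as follows. This is an illustrative scikit-learn reconstruction rather than the evaluation code used in the paper, and taking the member closest to each centroid as that role's representative is an assumption about an underspecified detail.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_role_features(B_team, K, seed=0):
    """K-Means baseline: cluster a team's members into K latent roles and take,
    for each role, the member closest to the cluster centroid as its
    representative (the choice of representative is an assumption here)."""
    km = KMeans(n_clusters=K, n_init=10, random_state=seed).fit(B_team)
    reps = []
    for k in range(K):
        dists = np.linalg.norm(B_team - km.cluster_centers_[k], axis=1)
        reps.append(B_team[np.argmin(dists)])
    return np.concatenate(reps)      # K*D features used to predict team quality

def rmse(y_true, y_pred):
    """Root mean square error over held-out teams, as in the 10-fold CV setup."""
    diff = np.asarray(y_true, dtype=float) - np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean(diff ** 2)))
```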
4.4 Role Assignment Validation We demonstrate the predicative power of our identified roles to team success above. In this part, we interpret the identified roles guided by different constraints in a team qualitatively, and show how different roles are distributed in a team, how each role contributes to teamwork, and how collaboration happens among the roles. 1676 Table 2: RMSE Comparison of Different Methods on the Elementary Course Average Leader K-Means K Worst K Best RI RIC RIC− RIC+ K = 1 13.945 16.957 14.212 13.092 20.464 14.982 N/A N/A N/A K = 2 N/A N/A 13.160 13.428 15.591 11.581 N/A N/A N/A K = 3 N/A N/A 12.291 15.460 14.251 9.517 10.486 27.314 10.251 Table 3: RMSE Comparison of Different Methods on the Secondary Course Average Leader K-Means K Worst K Best RI RIC RIC− RIC+ K = 1 12.571 15.611 12.583 17.899 10.886 13.297 N/A N/A N/A K = 2 N/A N/A 12.288 19.268 11.245 10.435 N/A N/A N/A K = 3 N/A N/A 11.218 22.933 14.079 10.143 10.961 24.583 10.427 4.4.1 Constraint Exploration By incorporating constraints into the role identification process, we expect to guide the model using human intuition such that the results will be more interpretable, although the prediction error might increase because of the limitation of the search space. We present three alternative possible constrained models here. The RIC model emphasizes picking one best member, one worst member and another generic member, which is achieved by putting one role to S+ and one to S− as defined in Equation 2. RIC+ aims at picking three best team members who collaborate to make the best contribution to the team success, achieved by putting three roles into S+. Similarly, RIC− rewards poorly performing students to contribute to teamwork quality, i.e. putting all roles into S−. Based on results shown in Table 2 and 3, we found that RIC+ and RIC work similar as RI even though RI is slightly better. RIC−gives quite unsatisfying performance which shows that examining the behavior of a set of poorly performing students is not very helpful in predicting teamwork success. The comparison of RIC+ and RIC−can be shown clearly in Figure 2, which presents the behavior representation of each role identified by RIC+ and RIC−. Obviously, RIC+ produces positive roles that contribute largely to the teamwork quality across all feature dimensions; such behaviors are what we want to encourage. Those identified roles are diverse and not symmetrical because each role achieves peaks at different feature dimensions. On the contrary, roles identified by RIC−works negatively towards teamwork quality and they have homogeneous behavior representation curves. Therefore, our constrained models can provide much interpretation, with a little loss of accuracy compared to RI. 4.4.2 Role Assignment Interpretation Leading Role Validation: As a validation, we found that one of our identified roles has substantial overlap with team leaders. For instance, in the Elementary course, around 70% of students who are assigned to Role 0 are actual leaders for RIC and RIC+ models. On the Secondary course, around 86% students who are in the position of Role 0 are real team leaders. When it comes to RIC−, such ratio drops to around 2% for all roles. This validates the ability of our models in producing role definitions that make sense. Information Diffusion: Figure 3 compares the information diffusion among different identified roles of RI, RIC, RIC+ and RIC−. The darker the node, the better grade it achieves. 
The number associated with each role indicates the average final grades (scale 0-100) of all students who are assigned to this role. The edge represents how many messages sent from one node to another. The thicker the edge, the more information it conveys. From the figure, we found that, RI performs similarly with RIC and roles in RIC+ have much higher grades compared to RIC−. One explanation is that RIC actually does not incorporate many constraints and is less interpretable compared to RIC+ and RIC−. As shown in (c), RIC+ Role 0 contributes more information to Role 1 with an average of 5.5 messages and to Role 2 with weight 6.1. Role 1 and Role 2 also have many messages communicated with others in their team. However, less communication happens in RIC−roles. This comparison comes much easier when it comes to each role’s behaviors on different normalized feature representations as shown in Figure 2 for 1677 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 1 RIC+ role_0 RIC+ role_1 RIC+ role_2 RIC- role_0 RIC- role_1 RIC- role_2 -0.6 -0.4 -0.2 0 0.2 0.4 0.6 0.8 1 RIC+ role_0 RIC+ role_1 RIC+ role_2 RIC- role_0 RIC- role_1 RIC- role_2 Figure 2: Beahvior Representation of Each Role on the Secondary Course Typical Behavior Representative Post RIC+ Team Building I started a new doc ... Let me know your email if you didn’t get the invite. Positive Great job team!! Our lesson plan is amazing and I learned so much ... Collaboration We plan to meet on Monday to figure out exactly how to complete the assignment ... Task Management Here’s what I propose: 1) to save time, use ... 2) Tara, do you have plans ... 3) once a lesson plan outline is up, we can each go in and add modifications.. RIC− Negative I’m confused. I answered all the questions ... and I didn’t see ... Strong Subjectivity I like the recycling lesson ... feeling so dumb.. really confused by Google Docs... Negation I’m not able to ... the pictures don’t show up...I don’t understand how to create a link.. Table 4: Representative Posts and Corresponding Behavior Feature Comparison on the Secondary Course RIC+ and RIC−models. It can be concluded that by incorporating rewarding and penalizing constraints, our model works effectively in picking the behavior profiles we want to encourage and avoid in a teamwork. Behavior Comparison: Table 4 presents several representative posts and their corresponding behavior features for our identified roles. Most features shown in Table 4 correspond to the peak behaviors associated with roles in Figure 2, which is consistent with our previous interpretation. For example, RIC+ picks the well performing student who adds calmness to the teamwork as indicated by using positive words and adopting collaborative strategies. On the contrary, RIC−reflects a less cooperative teamwork, such as strong subjectivity, negation and negativity indicated in their posts. In summary, our role identification models provide quite interpretable identified roles as discussed above, as well as accurate prediction of teamwork quality. More interpretability can be achieved by incorporating intuitive constraints and sacrificing a bit of accuracy. 5 Conclusion In this work, we propose a role identification model, which iteratively optimizes a team member role assignment that can predict the teamwork quality to the utmost extent. Furthermore, we extend it to a general constrained version that enables humans to incorporate external constraints to guide the identification of roles. 
The experimental results on two MOOCs show that both of our proposed role identification models can not only perform accurate predictions of teamwork quality, but also provide interpretable student role assignment results ranging from leading role validation to information diffusion. Even though we have only explored up to 3 roles in this work that would enable us to use most 1678 R0 R2 R1 (b) Secondary RIC 48.63 33.25 39.97 R0 R2 R1 R0 R2 R1 (c) Secondary RIC+ (d) Secondary RIC44.32 46.11 45.32 40.45 34.34 39.98 R0 R2 R1 (a) Secondary RI 48.63 33.25 39.79 Figure 3: Information Diffusion among Roles of our data, our role identification method is capable to experiment with a larger range of values of K, such as in the context of Wikipedia (Ferschke et al., 2015). Furthermore, our model can be directly applied to other online collaboration scenarios to help identify the roles that contribute to collaboration, not limited in the context of MOOCs. In the future, we are interested in relaxing the assumptions that people can take only one role and roles are taken up by only one person and incorporating mixed membership role matching strategies into our method. Furthermore, nonlinear relationship between roles and performance as well as the dependencies between roles should be explored. Last but not least, we plan to take advantage of our identified roles to provide guidance and recommendation to those weakly performing teams for better collaboration and engagement in online teamworks. Acknowledgement The authors would like to thank Hanxiao Liu, Jingbo Shang, Oliver Ferschke and the anonymous reviewers for their valuable comments and suggestions. This research was funded in part by NSF grant IIS-1320064, an Army Research Lab seedling grant, and funding from Google. References Ravindra K. Ahuja, Thomas L. Magnanti, and James B. Orlin. 1993. Network Flows: Theory, Algorithms, and Applications. Prentice-Hall, Inc., Upper Saddle River, NJ, USA. Edoardo M. Airoldi, David M. Blei, Stephen E. Fienberg, and Eric P. Xing. 2008. Mixed membership stochastic blockmodels. volume 9, pages 1981– 2014. JMLR.org, June. Aris Anagnostopoulos, Luca Becchetti, Carlos Castillo, Aristides Gionis, and Stefano Leonardi. 2010. Power in unity: Forming teams in large-scale community systems. In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, CIKM ’10, pages 599–608, New York, NY, USA. ACM. David Bamman, Brendan O’Connor, and Noah A. Smith. 2013. Learning latent personas of film characters. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 352–361, Sofia, Bulgaria, August. Association for Computational Linguistics. David Bamman, Ted Underwood, and Noah A Smith. 2014. A bayesian mixed effects model of literary character. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, volume 1, pages 370–379. Sumit Bhatia, Prakhar Biyani, and Prasenjit Mitra. 2014. Summarizing online forum discussions – can dialog acts of individual messages help? In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2127–2131, Doha, Qatar, October. Association for Computational Linguistics. Anais Cadilhac, Nicholas Asher, Farah Benamara, and Alex Lascarides. 2013. Grounding strategic conversation: Using negotiation dialogues to predict trades in a win-lose game. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 357–368, Seattle, Washington, USA, October. Association for Computational Linguistics. Oliver Ferschke, Diyi Yang, and Carolyn Ros´e. 2015. A lightly supervised approach to role identification in wikipedia talk page discussions. Mathilde Forestier, Anna Stavrianou, Julien Velcin, and Djamel A. Zighed. 2012. Roles in social networks: Methodologies and research issues. Web Intelli. and Agent Sys., 10(1):117–133, January. Sebastian Germesin and Theresa Wilson. 2009. Agreement detection in multiparty conversation. In Proceedings of the 2009 International Conference on Multimodal Interfaces, ICMI-MLMI ’09, pages 7–14, New York, NY, USA. ACM. 1679 Kazi Saidul Hasan and Vincent Ng. 2014. Why are you taking this stance? identifying and classifying reasons in ideological debates. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 751–762, Doha, Qatar, October. Association for Computational Linguistics. Arthur E. Hoerl and Robert W. Kennard. 2000. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 42(1):80–86, February. Xia Hu and Huan Liu. 2012. Social status and role analysis of palin’s email network. In Proceedings of the 21st International Conference Companion on World Wide Web, WWW ’12 Companion, pages 531–532, New York, NY, USA. ACM. Meiqun Hu, Ee-Peng Lim, and Ramayya Krishnan. 2009. Predicting outcome for collaborative featured article nomination in wikipedia. In Third International AAAI Conference on Weblogs and Social Media. Aniket Kittur and Robert E. Kraut. 2010. Beyond wikipedia: Coordination and conflict in online production groups. In Proceedings of the 2010 ACM Conference on Computer Supported Cooperative Work, CSCW ’10, pages 215–224, New York, NY, USA. ACM. R Meredith Belbin. 2011. Management teams: Why they succeed or fail. Human Resource Management International Digest, 19(3). Arjun Mukherjee, Vivek Venkataraman, Bing Liu, and Sharon Meraz. 2013. Public dialogue: Analysis of tolerance in online discussions. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1680–1690, Sofia, Bulgaria, August. Association for Computational Linguistics. George A Neuman, Stephen H Wagner, and Neil D Christiansen. 1999. The relationship between workteam personality composition and the job performance of teams. Group & Organization Management, 24(1):28–45. Michael J. Paul. 2012. Mixed membership markov models for unsupervised conversation modeling. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL 12, pages 94–104, Stroudsburg, PA, USA. Association for Computational Linguistics. Mario Piergallini, A Seza Do˘gru¨oz, Phani Gadde, David Adamson, and Carolyn P Ros´e. 2014. Modeling the use of graffiti style features to signal social relations within a multi-domain learning paradigm. EACL 2014, page 107. Florian A Potra and Stephen J Wright. 2000. Interiorpoint methods. Journal of Computational and Applied Mathematics, 124(1):281–302. K Ahuja Ravindra, Thomas L Magnanti, and James B Orlin. 1993. Network flows: theory, algorithms, and applications. Barbara Senior. 1997. Team roles and team performance: is there reallya link? Journal of occupational and organizational psychology, 70(3):241– 258. Swapna Somasundaran and Janyce Wiebe. 2009. 
Recognizing stances in online debates. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 - Volume 1, ACL ’09, pages 226–234, Stroudsburg, PA, USA. Association for Computational Linguistics. Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: Liwc and computerized text analysis methods. Journal of language and social psychology, 29(1):24–54. Matt Thomas, Bo Pang, and Lillian Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, EMNLP ’06, pages 327–335, Stroudsburg, PA, USA. Association for Computational Linguistics. Marilyn A. Walker, Pranav Anand, Robert Abbott, and Ricky Grant. 2012. Stance classification using dialogic properties of persuasion. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT ’12, pages 592–596, Stroudsburg, PA, USA. Association for Computational Linguistics. Byron C Wallace, Thomas A Trikalinos, M Barton Laws, Ira B Wilson, and Eugene Charniak. 2013. A generative joint, additive, sequential model of topics and speech acts in patient-doctor communication. In EMNLP, pages 1765–1775. Miaomiao Wen, Diyi Yang, and Carolyn Penstein Ros´e. 2015. Virtual teams in massive open online courses. In Artificial Intelligence in Education. Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phraselevel sentiment analysis. In Proceedings of the conference on human language technology and empirical methods in natural language processing, pages 347–354. Association for Computational Linguistics. Yuchen Zhao, Guan Wang, Philip S. Yu, Shaobo Liu, and Simon Zhang. 2013. Inferring social roles and statuses in social networks. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13, pages 695–703, New York, NY, USA. ACM. 1680
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1681–1691, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Deep Unordered Composition Rivals Syntactic Methods for Text Classification Mohit Iyyer,1 Varun Manjunatha,1 Jordan Boyd-Graber,2 Hal Daum´e III1 1University of Maryland, Department of Computer Science and UMIACS 2University of Colorado, Department of Computer Science {miyyer,varunm,hal}@umiacs.umd.edu, [email protected] Abstract Many existing deep learning models for natural language processing tasks focus on learning the compositionality of their inputs, which requires many expensive computations. We present a simple deep neural network that competes with and, in some cases, outperforms such models on sentiment analysis and factoid question answering tasks while taking only a fraction of the training time. While our model is syntactically-ignorant, we show significant improvements over previous bag-of-words models by deepening our network and applying a novel variant of dropout. Moreover, our model performs better than syntactic models on datasets with high syntactic variance. We show that our model makes similar errors to syntactically-aware models, indicating that for the tasks we consider, nonlinearly transforming the input is more important than tailoring a network to incorporate word order and syntax. 1 Introduction Vector space models for natural language processing (NLP) represent words using low dimensional vectors called embeddings. To apply vector space models to sentences or documents, one must first select an appropriate composition function, which is a mathematical process for combining multiple words into a single vector. Composition functions fall into two classes: unordered and syntactic. Unordered functions treat input texts as bags of word embeddings, while syntactic functions take word order and sentence structure into account. Previously published experimental results have shown that syntactic functions outperform unordered functions on many tasks (Socher et al., 2013b; Kalchbrenner and Blunsom, 2013). However, there is a tradeoff: syntactic functions require more training time than unordered composition functions and are prohibitively expensive in the case of huge datasets or limited computing resources. For example, the recursive neural network (Section 2) computes costly matrix/tensor products and nonlinearities at every node of a syntactic parse tree, which limits it to smaller datasets that can be reliably parsed. We introduce a deep unordered model that obtains near state-of-the-art accuracies on a variety of sentence and document-level tasks with just minutes of training time on an average laptop computer. This model, the deep averaging network (DAN), works in three simple steps: 1. take the vector average of the embeddings associated with an input sequence of tokens 2. pass that average through one or more feedforward layers 3. perform (linear) classification on the final layer’s representation The model can be improved by applying a novel dropout-inspired regularizer: for each training instance, randomly drop some of the tokens’ embeddings before computing the average. We evaluate DANs on sentiment analysis and factoid question answering tasks at both the sentence and document level in Section 4. 
Our model’s successes demonstrate that for these tasks, the choice of composition function is not as important as initializing with pretrained embeddings and using a deep network. Furthermore, DANs, unlike more complex composition functions, can be effectively trained on data that have high syntactic variance. A 1681 qualitative analysis of the learned layers suggests that the model works by magnifying tiny but meaningful differences in the vector average through multiple hidden layers, and a detailed error analysis shows that syntactically-aware models actually make very similar errors to those of the more na¨ıve DAN. 2 Unordered vs. Syntactic Composition Our goal is to marry the speed of unordered functions with the accuracy of syntactic functions. In this section, we first describe a class of unordered composition functions dubbed “neural bagof-words models” (NBOW). We then explore more complex syntactic functions designed to avoid many of the pitfalls associated with NBOW models. Finally, we present the deep averaging network (DAN), which stacks nonlinear layers over the traditional NBOW model and achieves performance on par with or better than that of syntactic functions. 2.1 Neural Bag-of-Words Models For simplicity, consider text classification: map an input sequence of tokens X to one of k labels. We first apply a composition function g to the sequence of word embeddings vw for w ∈X. The output of this composition function is a vector z that serves as input to a logistic regression function. In our instantiation of NBOW, g averages word embeddings1 z = g(w ∈X) = 1 |X| X w∈X vw. (1) Feeding z to a softmax layer induces estimated probabilities for each output label ˆy = softmax(Ws · z + b), (2) where the softmax function is softmax(q) = exp q Pk j=1 exp qj (3) Ws is a k × d matrix for a dataset with k output labels, and b is a bias term. We train the NBOW model to minimize crossentropy error, which for a single training instance with ground-truth label y is ℓ(ˆy) = k X p=1 yp log(ˆyp). (4) 1Preliminary experiments indicate that averaging outperforms the vector sum used in NBOW from Kalchbrenner et al. (2014). Before we describe our deep extension of the NBOW model, we take a quick detour to discuss syntactic composition functions. Connections to other representation frameworks are discussed further in Section 4. 2.2 Considering Syntax for Composition Given a sentence like “You’ll be more entertained getting hit by a bus”, an unordered model like NBOW might be deceived by the word “entertained” to return a positive prediction. In contrast, syntactic composition functions rely on the order and structure of the input to learn how one word or phrase affects another, sacrificing computational efficiency in the process. In subsequent sections, we argue that this complexity is not matched by a corresponding gain in performance. Recursive neural networks (RecNNs) are syntactic functions that rely on natural language’s inherent structure to achieve state-of-the-art accuracies on sentiment analysis tasks (Tai et al., 2015). As in NBOW, each word type has an associated embedding. However, the composition function g now depends on a parse tree of the input sequence. The representation for any internal node in a binary parse tree is computed as a nonlinear function of the representations of its children (Figure 1, left). A more powerful RecNN variant is the recursive neural tensor network (RecNTN), which modifies g to include a costly tensor product (Socher et al., 2013b). 
While RecNNs can model complex linguistic phenomena like negation (Hermann et al., 2013), they require much more training time than NBOW models. The nonlinearities and matrix/tensor products at each node of the parse tree are expensive, especially as model dimensionality increases. RecNNs also require an error signal at every node. One root softmax is not strong enough for the model to learn compositional relations and leads to worse accuracies than standard bag-of-words models (Li, 2014). Finally, RecNNs require relatively consistent syntax between training and test data due to their reliance on parse trees and thus cannot effectively incorporate out-of-domain data, as we show in our question-answering experiments. Kim (2014) shows that some of these issues can be avoided by using a convolutional network instead of a RecNN, but the computational complexity increases even further (see Section 4 for runtime comparisons). What contributes most to the power of syntactic 1682 Predator c1 is c2 a c3 masterpiece c4 z1 = f(W c3 c4  + b) z2 = f(W c2 z1  + b) z3 = f(W c1 z2  + b) softmax softmax softmax RecNN Predator c1 is c2 a c3 masterpiece c4 av = 4P i=1 ci 4 h1 = f(W1 · av + b1) h2 = f(W2 · h1 + b2) softmax DAN Figure 1: On the left, a RecNN is given an input sentence for sentiment classification. Softmax layers are placed above every internal node to avoid vanishing gradient issues. On the right is a two-layer DAN taking the same input. While the RecNN has to compute a nonlinear representation (purple vectors) for every node in the parse tree of its input, this DAN only computes two nonlinear layers for every possible input. functions: the compositionality or the nonlinearities? Socher et al. (2013b) report that removing the nonlinearities from their RecNN models drops performance on the Stanford Sentiment Treebank by over 5% absolute accuracy. Most unordered functions are linear mappings between bag-of-words features and output labels, so might they suffer from the same issue? To isolate the effects of syntactic composition from the nonlinear transformations that are crucial to RecNN performance, we investigate how well a deep version of the NBOW model performs on tasks that have recently been dominated by syntactically-aware models. 3 Deep Averaging Networks The intuition behind deep feed-forward neural networks is that each layer learns a more abstract representation of the input than the previous one (Bengio et al., 2013). We can apply this concept to the NBOW model discussed in Section 2.1 with the expectation that each layer will increasingly magnify small but meaningful differences in the word embedding average. To be more concrete, take s1 as the sentence “I really loved Rosamund Pike’s performance in the movie Gone Girl” and generate s2 and s3 by replacing “loved” with “liked” and then again by “despised”. The vector averages of these three sentences are almost identical, but the averages associated with the synonymous sentences s1 and s2 are slightly more similar to each other than they are to s3’s average. Could adding depth to NBOW make small such distinctions as this one more apparent? In Equation 1, we compute z, the vector representation for input text X, by averaging the word vectors vw∈X. Instead of directly passing this representation to an output layer, we can further transform z by adding more layers before applying the softmax. Suppose we have n layers, z1...n. 
We compute each layer zi = g(zi−1) = f(Wi · zi−1 + bi) (5) and feed the final layer’s representation, zn, to a softmax layer for prediction (Figure 1, right). This model, which we call a deep averaging network (DAN), is still unordered, but its depth allows it to capture subtle variations in the input better than the standard NBOW model. Furthermore, computing each layer requires just a single matrix multiplication, so the complexity scales with the number of layers rather than the number of nodes in a parse tree. In practice, we find no significant difference between the training time of a DAN and that of the shallow NBOW model. 3.1 Word Dropout Improves Robustness Dropout regularizes neural networks by randomly setting hidden and/or input units to zero with some probability p (Hinton et al., 2012; Srivastava et al., 2014). Given a neural network with n units, dropout prevents overfitting by creating an ensemble of 2n different networks that share parameters, where each network consists of some combination of dropped and undropped units. Instead of dropping units, a natural extension for the DAN model is to randomly drop word tokens’ entire word embeddings from the vector average. Using this method, 1683 which we call word dropout, our network theoretically sees 2|X| different token sequences for each input X. We posit a vector r with |X| independent Bernoulli trials, each of which equals 1 with probability p. The embedding vw for token w in X is dropped from the average if rw is 0, which exponentially increases the number of unique examples the network sees during training. This allows us to modify Equation 1: rw ∼Bernoulli(p) (6) ˆX = {w|w ∈X and rw > 0} (7) z = g(w ∈X) = P w∈ˆ X vw | ˆX| . (8) Depending on the choice of p, many of the “dropped” versions of an original training instance will be very similar to each other, but for shorter inputs this is less likely. We might drop a very important token, such as “horrible” in “the crab rangoon was especially horrible”; however, since the number of word types that are predictive of the output labels is low compared to non-predictive ones (e.g., neutral words in sentiment analysis), we always see improvements using this technique. Theoretically, word dropout can also be applied to other neural network-based approaches. However, we observe no significant performance differences in preliminary experiments when applying word dropout to leaf nodes in RecNNs for sentiment analysis (dropped leaf representations are set to zero vectors), and it slightly hurts performance on the question answering task. 4 Experiments We compare DANs to both the shallow NBOW model as well as more complicated syntactic models on sentence and document-level sentiment analysis and factoid question answering tasks. The DAN architecture we use for each task is almost identical, differing across tasks only in the type of output layer and the choice of activation function. Our results show that DANs outperform other bag-ofwords models and many syntactic models with very little training time.2 On the question-answering task, DANs effectively train on out-of-domain data, while RecNNs struggle to reconcile the syntactic differences between the training and test data. 2Code at http://github.com/miyyer/dan. 
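To make the model concrete, the following is a minimal NumPy sketch of the DAN forward pass with word dropout (Equations 1 and 5–8). It is an illustration of the equations above, not the released implementation linked in the footnote; the ReLU activation reflects the sentiment configuration reported later, and all parameter names are placeholders.

```python
import numpy as np

def softmax(q):
    e = np.exp(q - q.max())
    return e / e.sum()

def dan_forward(embeddings, layers, W_s, b_s, p_drop=0.3, train=True, rng=None):
    """embeddings: (|X|, d) word vectors for the input tokens
    layers:     list of (W_i, b_i) pairs for the hidden layers (Eq. 5)
    W_s, b_s:   softmax layer parameters (Eq. 2)
    p_drop:     probability of dropping each token's embedding (Eqs. 6-7)
    """
    rng = rng or np.random.default_rng()
    if train:                                      # word dropout
        keep = rng.random(len(embeddings)) >= p_drop
        if not keep.any():                         # guard against an empty average
            keep[rng.integers(len(embeddings))] = True
        embeddings = embeddings[keep]
    z = embeddings.mean(axis=0)                    # Eq. 1 / Eq. 8: vector average
    for W, b in layers:                            # Eq. 5: z_i = f(W_i z_{i-1} + b_i)
        z = np.maximum(0.0, W @ z + b)             # ReLU, as in the sentiment setup
    return softmax(W_s @ z + b_s)                  # Eq. 2: label probabilities
```

Training then minimizes the cross-entropy of Equation 4 over these output probabilities; word dropout is applied only at training time.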
Model RT SST SST IMDB Time fine bin (s) DAN-ROOT — 46.9 85.7 — 31 DAN-RAND 77.3 45.4 83.2 88.8 136 DAN 80.3 47.7 86.3 89.4 136 NBOW-RAND 76.2 42.3 81.4 88.9 91 NBOW 79.0 43.6 83.6 89.0 91 BiNB — 41.9 83.1 — — NBSVM-bi 79.4 — — 91.2 — RecNN∗ 77.7 43.2 82.4 — — RecNTN∗ — 45.7 85.4 — — DRecNN — 49.8 86.6 — 431 TreeLSTM — 50.6 86.9 — — DCNN∗ — 48.5 86.9 89.4 — PVEC∗ — 48.7 87.8 92.6 — CNN-MC 81.1 47.4 88.1 — 2,452 WRRBM∗ — — — 89.2 — Table 1: DANs achieve comparable sentiment accuracies to syntactic functions (bottom third of table) but require much less training time (measured as time of a single epoch on the SST fine-grained task). Asterisked models are initialized either with different pretrained embeddings or randomly. 4.1 Sentiment Analysis Recently, syntactic composition functions have revolutionized both fine-grained and binary (positive or negative) sentiment analysis. We conduct sentence-level sentiment experiments on the Rotten Tomatoes (RT) movie reviews dataset (Pang and Lee, 2005) and its extension with phrase-level labels, the Stanford Sentiment Treebank (SST) introduced by Socher et al. (2013b). Our model is also effective on the document-level IMDB movie review dataset of Maas et al. (2011). 4.1.1 Neural Baselines Most neural approaches to sentiment analysis are variants of either recursive or convolutional networks. Our recursive neural network baselines include standard RecNNs (Socher et al., 2011b), RecNTNs, the deep recursive network (DRecNN) proposed by ˙Irsoy and Cardie (2014), and the TREE-LSTM of (Tai et al., 2015). Convolutional network baselines include the dynamic convolutional network (Kalchbrenner et al., 2014, DCNN) and the convolutional neural network multichannel (Kim, 2014, CNN-MC). Our other neural baselines are the sliding-window based paragraph vector (Le and Mikolov, 2014, PVEC)3 and 3PVEC is computationally expensive at both training and test time and requires enough memory to store a vector for every paragraph in the training data. 1684 the word-representation restricted Boltzmann machine (Dahl et al., 2012, WRRBM), which only works on the document-level IMDB task.4 4.1.2 Non-Neural Baselines We also compare to non-neural baselines, specifically the bigram na¨ıve Bayes (BINB) and na¨ıve Bayes support vector machine (NBSVM-BI) models introduced by Wang and Manning (2012), both of which are memory-intensive due to huge feature spaces of size |V |2. 4.1.3 DAN Configurations In Table 1, we compare a variety of DAN and NBOW configurations5 to the baselines described above. In particular, we are interested in not only comparing DAN accuracies to those of the baselines, but also how initializing with pretrained embeddings and restricting the model to only root-level labels affects performance. With this in mind, the NBOW-RAND and DAN-RAND models are initialized with random 300-dimensional word embeddings, while the other models are initialized with publicly-available 300-d GloVe vectors trained over the Common Crawl (Pennington et al., 2014). The DAN-ROOT model only has access to sentence-level labels for SST experiments, while all other models are trained on labeled phrases (if they exist) in addition to sentences. We train all NBOW and DAN models using AdaGrad (Duchi et al., 2011). We apply DANs to documents by averaging the embeddings for all of a document’s tokens and then feeding that average through multiple layers as before. 
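All NBOW and DAN models above are trained with AdaGrad; for completeness, a generic sketch of the standard per-parameter AdaGrad update is given below. This is the textbook algorithm, not the authors' training code, and the learning rate shown is an arbitrary placeholder.

```python
import numpy as np

class AdaGrad:
    """Standard AdaGrad (Duchi et al., 2011): each parameter's step is scaled by
    the inverse square root of its accumulated squared gradients."""
    def __init__(self, shape, lr=0.05, eps=1e-8):
        self.lr, self.eps = lr, eps
        self.hist = np.zeros(shape)        # running sum of squared gradients

    def step(self, param, grad):
        self.hist += grad ** 2
        return param - self.lr * grad / (np.sqrt(self.hist) + self.eps)
```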
Since the representations computed by DANs are always d-dimensional vectors regardless of the input size, they are efficient with respect to both memory and computational cost. We find that the hyperparameters selected on the SST also work well for the IMDB task. 4.1.4 Dataset Details We evaluate over both fine-grained and binary sentence-level classification tasks on the SST, and just the binary task on RT and IMDB. In the finegrained SST setting, each sentence has a label from zero to five where two is the neutral class. For the binary task, we ignore all neutral sentences.6 4The WRRBM is trained using a slow Metropolis-Hastings algorithm. 5Best hyperparameters chosen by cross-validation: three 300-d ReLu layers, word dropout probability p = 0.3, L2 regularization weight of 1e-5 applied to all parameters 6Our fine-grained SST split is {train: 8,544, dev: 1,101, test: 2,210}, while our binary split is {train: 6,920, dev:872, 4.1.5 Results The DAN achieves the second best reported result on the RT dataset, behind only the significantly slower CNN-MC model. It’s also competitive with more complex models on the SST and outperforms the DCNN and WRRBM on the document-level IMDB task. Interestingly, the DAN achieves good performance on the SST when trained with only sentence-level labels, indicating that it does not suffer from the vanishing error signal problem that plagues RecNNs. Since acquiring labelled phrases is often expensive (Sayeed et al., 2012; Iyyer et al., 2014b), this result is promising for large or messy datasets where fine-grained annotation is infeasible. 4.1.6 Timing Experiments DANs require less time per epoch and—in general— require fewer epochs than their syntactic counterparts. We compare DAN runtime on the SST to publicly-available implementations of syntactic baselines in the last column of Table 1; the reported times are for a single epoch to control for hyperparameter choices such as learning rate, and all models use 300-d word vectors. Training a DAN on just sentence-level labels on the SST takes under five minutes on a single core of a laptop; when labeled phrases are added as separate training instances, training time jumps to twenty minutes.7 All timing experiments were performed on a single core of an Intel I7 processor with 8GB of RAM. 4.2 Factoid Question Answering DANs work well for sentiment analysis, but how do they do on other NLP tasks? We shift gears to a paragraph-length factoid question answering task and find that our model outperforms other unordered functions as well as a more complex syntactic RecNN model. More interestingly, we find that unlike the RecNN, the DAN significantly benefits from out-of-domain Wikipedia training data. Quiz bowl is a trivia competition in which players are asked four-to-six sentence questions about entities (e.g., authors, battles, or events). It is an ideal task to evaluate DANs because there is prior test:1,821}. Split sizes increase by an order of magnitude when labeled phrases are added to the training set. For RT, we do 10-fold CV over a balanced binary dataset of 10,662 sentences. Similarly, for the IMDB experiments we use the provided balanced binary training set of 25,000 documents. 7We also find that DANs take significantly fewer epochs to reach convergence than syntactic models. 
1685 Model Pos 1 Pos 2 Full Time(s) BoW-DT 35.4 57.7 60.2 — IR 37.5 65.9 71.4 N/A QANTA 47.1 72.1 73.7 314 DAN 46.4 70.8 71.8 18 IR-WIKI 53.7 76.6 77.5 N/A QANTA-WIKI 46.5 72.8 73.9 1,648 DAN-WIKI 54.8 75.5 77.1 119 Table 2: The DAN achieves slightly lower accuracies than the more complex QANTA in much less training time, even at early sentence positions where compositionality plays a bigger role. When Wikipedia is added to the training set (bottom half of table), the DAN outperforms QANTA and achieves comparable accuracy to a state-of-theart information retrieval baseline, which highlights a benefit of ignoring word order for this task. G G G G G G 69 70 71 0.0 0.1 0.2 0.3 0.4 0.5 Dropout Probability History QB Accuracy Effect of Word Dropout Figure 2: Randomly dropping out 30% of words from the vector average is optimal for the quiz bowl task, yielding a gain in absolute accuracy of almost 3% on the quiz bowl question dataset compared to the same model trained with no word dropout. work using both syntactic and unordered models for quiz bowl question answering. In Boyd-Graber et al. (2012), na¨ıve Bayes bag-of-words models (BOW-DT) and sequential language models work well on easy questions but poorly on harder ones. A dependency-tree RecNN called QANTA proposed in Iyyer et al. (2014a) shows substantial improvements, leading to the hypothesis that correctly modeling compositionality is crucial for answering hard questions. 4.2.1 Dataset and Experimental Setup To test this, we train a DAN over the history questions from Iyyer et al. (2014a).8 This dataset is aug8The training set contains 14,219 sentences over 3,761 questions. For more detail about data and baseline systems, mented with 49,581 sentence/page-title pairs from the Wikipedia articles associated with the answers in the dataset. For fair comparison with QANTA, we use a normalized tanh activation function at the last layer instead of ReLu, and we also change the output layer from a softmax to the margin ranking loss (Weston et al., 2011) used in QANTA. We initialize the DAN with the same pretrained 100d word embeddings that were used to initialize QANTA. We also evaluate the effectiveness of word dropout on this task in Figure 2. Cross-validation indicates that p = 0.3 works best for question answering, although the improvement in accuracy is negligible for sentiment analysis. Finally, continuing the trend observed in the sentiment experiments, DAN converges much faster than QANTA. 4.2.2 DANs Improve with Noisy Data Table 2 shows that while DAN is slightly worse than QANTA when trained only on question-answer pairs, it improves when trained on additional outof-domain Wikipedia data (DAN-WIKI), reaching performance comparable to that of a state-of-the-art information retrieval system (IR-WIKI). QANTA, in contrast, barely improves when Wikipedia data is added (QANTA-WIKI) possibly due to the syntactic differences between Wikipedia text and quiz bowl question text. The most common syntactic structures in quiz bowl sentences are imperative constructions such as “Identify this British author who wrote Wuthering Heights”, which are almost never seen in Wikipedia. Furthermore, the subject of most quiz bowl sentences is a pronoun or pronomial mention referring to the answer, a property that is not true of Wikipedia sentences (e.g., “Little of Emily’s work from this period survives, except for poems spoken by characters.”). 
Finally, many Wikipedia sentences do not uniquely identify the title of the page they come from, such as the following sentence from Emily Bront¨e’s page: “She does not seem to have made any friends outside her family.” While noisy data affect both DAN and QANTA, the latter is further hampered by the syntactic divergence between quiz bowl questions and Wikipedia, which may explain the lack of improvement in accuracy. see Iyyer et al. (2014a). 1686 0 10 20 30 40 50 0 1 2 3 4 5 Layer Perturbation Response cool okay the worst underwhelming Perturbation Response vs. Layer Figure 3: Perturbation response (difference in 1norm) at each layer of a 5-layer DAN after replacing awesome in the film’s performances were awesome with four words of varying sentiment polarity. While the shallow NBOW model does not show any meaningful distinctions, we see that as the network gets deeper, negative sentences are increasingly different from the original positive sentence. G G G G G G G G G G G G G G 83 84 85 86 87 0 2 4 6 Number of Layers Binary Classification Accuracy G G DAN DAN−ROOT Effect of Depth on Sentiment Accuracy Figure 4: Two to three layers is optimal for the DAN on the SST binary sentiment analysis task, but adding any depth at all is an improvement over the shallow NBOW model. 5 How Do DANs Work? In this section we first examine how the deep layers of the DAN amplify tiny differences in the vector average that are predictive of the output labels. Next, we compare DANs to DRecNNs on sentences that contain negations and contrastive conjunctions and find that both models make similar errors despite the latter’s increased complexity. Finally, we analyze the predictive ability of unsupervised word embeddings on a simple sentiment task in an effort to explain why initialization with these embeddings improves the DAN. 5.1 Perturbation Analysis Following the work of ˙Irsoy and Cardie (2014), we examine our network by measuring the response at each hidden layer to perturbations in an input sentence. In particular, we use the template the film’s performances were awesome and replace the final word with increasingly negative polarity words (cool, okay, underwhelming, the worst). For each perturbed sentence, we observe how much the hidden layers differ from those associated with the original template in 1-norm. Figure 3 shows that as a DAN gets deeper, the differences between negative and positive sentences become increasingly amplified. While nonexistent in the shallow NBOW model, these differences are visible even with just a single hidden layer, thus explaining why deepening the NBOW improves sentiment analysis as shown in Figure 4. 5.2 Handling Negations and “but”: Where Syntax is Still Needed While DANs outperform other bag-of-words models, how can they model linguistic phenomena such as negation without considering word order? To evaluate DANs over tougher inputs, we collect 92 sentences, each of which contains at least one negation and one contrastive conjunction, from the dev and test sets of the SST.9 Our fine-grained accuracy is higher on this subset than on the full dataset, improving almost five percent absolute accuracy to 53.3%. The DRecNN model of ˙Irsoy and Cardie (2014) obtains a similar accuracy of 51.1%, contrary to our intuition that syntactic functions should outperform unordered functions on sentences that clearly require syntax to understand.10 Are these sentences truly difficult to classify? 
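Returning for a moment to the measurement behind Figure 3, the perturbation response of Section 5.1 is simple to compute. The sketch below assumes access to the trained hidden-layer weights and to the token embedding matrices of the original and perturbed sentences; it is an illustration of the analysis, not the authors' script.

```python
import numpy as np

def perturbation_response(layers, emb_original, emb_perturbed):
    """1-norm difference between the hidden representations of an original
    sentence and a perturbed one at every layer of a trained DAN (no dropout).
    Each emb_* argument is the (|X|, d) matrix of that sentence's word vectors."""
    def hidden_states(emb):
        z, states = emb.mean(axis=0), []
        for W, b in layers:
            z = np.maximum(0.0, W @ z + b)
            states.append(z)
        return states
    return [float(np.abs(a - b).sum())
            for a, b in zip(hidden_states(emb_original),
                            hidden_states(emb_perturbed))]
```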
A close inspection reveals that both the DAN and the DRecNN have an overwhelming tendency to predict negative sentiment (60.9% and 55.4% of the time for the DAN and DRecNN respectively) when they see a negation compared to positive sentiment (35.9% for DANs, 34.8% for DRecNNs). If we further restrict our subset of sentences to only those with positive ground truth labels, we find that while both models struggle, the DRecNN obtains 41.7% accuracy, outperforming the DAN’s 37.5%. To understand why a negation or contrastive conjunction triggers a negative sentiment prediction, 9We search for non-neutral sentences containing not / n’t, and but. 48 of the sentences are positive while 44 are negative. 10Both models are initialized with pretrained 300-d GloVe embeddings for fair comparison. 1687 Sentence DAN DRecNN Ground Truth a lousy movie that’s not merely unwatchable , but also unlistenable negative negative negative if you’re not a prepubescent girl , you’ll be laughing at britney spears ’ movie-starring debut whenever it does n’t have you impatiently squinting at your watch negative negative negative blessed with immense physical prowess he may well be, but ahola is simply not an actor positive neutral negative who knows what exactly godard is on about in this film , but his words and images do n’t have to add up to mesmerize you. positive positive positive it’s so good that its relentless , polished wit can withstand not only inept school productions , but even oliver parker ’s movie adaptation negative positive positive too bad , but thanks to some lovely comedic moments and several fine performances , it’s not a total loss negative negative positive this movie was not good negative negative negative this movie was good positive positive positive this movie was bad negative negative negative the movie was not bad negative negative positive Table 3: Predictions of DAN and DRecNN models on real (top) and synthetic (bottom) sentences that contain negations and contrastive conjunctions. In the first column, words colored red individually predict the negative label when fed to a DAN, while blue words predict positive. The DAN learns that the negators not and n’t are strong negative predictors, which means it is unable to capture double negation as in the last real example and the last synthetic example. The DRecNN does slightly better on the synthetic double negation, predicting a lower negative polarity. we show six sentences from the negation subset and four synthetic sentences in Table 3, along with both models’ predictions. The token-level predictions in the table (shown as colored boxes) are computed by passing each token through the DAN as separate test instances. The tokens not and n’t are strongly predictive of negative sentiment. While this simplified “negation” works for many sentences in the datasets we consider, it prevents the DAN from reasoning about double negatives, as in “this movie was not bad”. The DRecNN does slightly better in this case by predicting a lesser negative polarity than the DAN; however, we theorize that still more powerful syntactic composition functions (and more labelled instances of negation and related phenomena) are necessary to truly solve this problem. 5.3 Unsupervised Embeddings Capture Sentiment Our model consistently converges slower to a worse solution (dropping 3% in absolute accuracy on coarse-grained SST) when we randomly initialize the word embeddings. 
This does not apply to just DANs; both convolutional and recursive networks do the same (Kim, 2014; ˙Irsoy and Cardie, 2014). Why are initializations with these embeddings so crucial to obtaining good performance? Is it possible that unsupervised training algorithms are already capturing sentiment? We investigate this theory by conducting a simple experiment: given a sentiment lexicon containing both positive and negative words, we train a logistic regression to discriminate between the associated word embeddings (without any fine-tuning). We use the lexicon created by Hu and Liu (2004), which consists of 2,006 positive words and 4,783 negative words. We balance and split the dataset into 3,000 training words and 1,000 test words. Using 300-dimensional GloVe embeddings pretrained over the Common Crawl, we obtain over 95% accuracy on the unseen test set, supporting the hypothesis that unsupervised pretraining over large corpora can capture properties such as sentiment. Intuitively, after the embeddings are fine-tuned during DAN training, we might expect a decrease in the norms of stopwords and an increase in the 1688 norms of sentiment-rich words like “awesome” or “horrible”. However, we find no significant differences between the L2 norms of stopwords and words in the sentiment lexicon of Hu and Liu (2004). 6 Related Work Our DAN model builds on the successes of both simple vector operations and neural network-based models for compositionality. There are a variety of element-wise vector operations that could replace the average used in the DAN. Mitchell and Lapata (2008) experiment with many of them to model the compositionality of short phrases. Later, their work was extended to take into account the syntactic relation between words (Erk and Pad´o, 2008; Baroni and Zamparelli, 2010; Kartsaklis and Sadrzadeh, 2013) and grammars (Coecke et al., 2010; Grefenstette and Sadrzadeh, 2011). While the average works best for the tasks that we consider, Banea et al. (2014) find that simply summing word2vec embeddings outperforms all other methods on the SemEval 2014 phrase-to-word and sentence-to-phrase similarity tasks. Once we compute the embedding average in a DAN, we feed it to a deep neural network. In contrast, most previous work on neural network-based methods for NLP tasks explicitly model word order. Outside of sentiment analysis, RecNN-based approaches have been successful for tasks such as parsing (Socher et al., 2013a), machine translation (Liu et al., 2014), and paraphrase detection (Socher et al., 2011a). Convolutional networks also model word order in local windows and have achieved performance comparable to or better than that of RecNNs on many tasks (Collobert and Weston, 2008; Kim, 2014). Meanwhile, feedforward architectures like that of the DAN have been used for language modeling (Bengio et al., 2003), selectional preference acquisition (Van de Cruys, 2014), and dependency parsing (Chen and Manning, 2014). 7 Future Work In Section 5, we showed that the performance of our DAN model worsens on sentences that contain lingustic phenomena such as double negation. One promising future direction is to cascade classifiers such that syntactic models are used only when a DAN is not confident in its prediction. We can also extend the DAN’s success at incorporating out-of-domain training data to sentiment analysis: imagine training a DAN on labeled tweets for classification on newspaper reviews. 
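(As an aside on the probe in Section 5.3: the lexicon experiment amounts to a few lines of scikit-learn, sketched below under the assumption that the positive and negative lexicon words and their pretrained GloVe vectors have already been loaded; the function name and the 3,000/1,000 split follow the description above, while the exact balancing and preprocessing are assumptions.)

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def lexicon_probe(pos_vecs, neg_vecs, n_train=3000, seed=0):
    """Train a logistic regression on frozen pretrained embeddings of
    positive vs. negative lexicon words and report held-out accuracy."""
    X = np.vstack([pos_vecs, neg_vecs])
    y = np.array([1] * len(pos_vecs) + [0] * len(neg_vecs))
    idx = np.random.default_rng(seed).permutation(len(y))
    train, test = idx[:n_train], idx[n_train:]
    clf = LogisticRegression(max_iter=1000).fit(X[train], y[train])
    return clf.score(X[test], y[test])
```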
Another potentially interesting application is to add gated units to a DAN,as has been done for recurrent and recursive neural networks (Hochreiter and Schmidhuber, 1997; Cho et al., 2014; Sutskever et al., 2014; Tai et al., 2015), to drop useless words rather than randomly-selected ones. 8 Conclusion In this paper, we introduce the deep averaging network, which feeds an unweighted average of word vectors through multiple hidden layers before classification. The DAN performs competitively with more complicated neural networks that explicitly model semantic and syntactic compositionality. It is further strengthened by word dropout, a regularizer that reduces input redundancy. DANs obtain close to state-of-the-art accuracy on both sentence and document-level sentiment analysis and factoid question-answering tasks with much less training time than competing methods; in fact, all experiments were performed in a matter of minutes on a single laptop core. We find that both DANs and syntactic functions make similar errors given syntactically-complex input, which motivates research into more powerful models of compositionality. Acknowledgments We thank Ozan ˙Irsoy not only for many insightful discussions but also for suggesting some of the experiments that we included in the paper. We also thank the anonymous reviewers, Richard Socher, Arafat Sultan, and the members of the UMD “Thinking on Your Feet” research group for their helpful comments. This work was supported by NSF Grant IIS-1320538. Boyd-Graber is also supported by NSF Grants CCF-1409287 and NCSE1422492. Any opinions, findings, conclusions, or recommendations expressed here are those of the authors and do not necessarily reflect the view of the sponsor. 1689 References Carmen Banea, Di Chen, Rada Mihalcea, Claire Cardie, and Janyce Wiebe. 2014. Simcompass: Using deep learning word embeddings to assess cross-level similarity. In SemEval. Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjectivenoun constructions in semantic space. In Proceedings of Empirical Methods in Natural Language Processing. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research. Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828. Jordan Boyd-Graber, Brianna Satinoff, He He, and Hal Daum´e III. 2012. Besting the quiz master: Crowdsourcing incremental classification games. In Proceedings of Empirical Methods in Natural Language Processing. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of Empirical Methods in Natural Language Processing. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoderdecoder for statistical machine translation. In Proceedings of Empirical Methods in Natural Language Processing. Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributional model of meaning. Linguistic Analysis (Lambek Festschirft). Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the International Conference of Machine Learning. 
George E Dahl, Ryan P Adams, and Hugo Larochelle. 2012. Training restricted boltzmann machines on word observations. In Proceedings of the International Conference of Machine Learning. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research. Katrin Erk and Sebastian Pad´o. 2008. A structured vector space model for word meaning in context. In Proceedings of Empirical Methods in Natural Language Processing. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of Empirical Methods in Natural Language Processing. Karl Moritz Hermann, Edward Grefenstette, and Phil Blunsom. 2013. ”not not bad” is not ”bad”: A distributional account of negation. Proceedings of the ACL Workshop on Continuous Vector Space Models and their Compositionality. Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580. Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long shortterm memory. Neural computation. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Knowledge Discovery and Data Mining. Ozan ˙Irsoy and Claire Cardie. 2014. Deep recursive neural networks for compositionality in language. In Proceedings of Advances in Neural Information Processing Systems. Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum´e III. 2014a. A neural network for factoid question answering over paragraphs. In Proceedings of Empirical Methods in Natural Language Processing. Mohit Iyyer, Peter Enns, Jordan Boyd-Graber, and Philip Resnik. 2014b. Political ideology detection using recursive neural networks. In Proceedings of the Association for Computational Linguistics. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. In ACL Workshop on Continuous Vector Space Models and their Compositionality. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the Association for Computational Linguistics. Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2013. Prior disambiguation of word tensors for constructing sentence vectors. In Proceedings of Empirical Methods in Natural Language Processing. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of Empirical Methods in Natural Language Processing. Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. In Proceedings of the International Conference of Machine Learning. Jiwei Li. 2014. Feature weight tuning for recursive neural networks. CoRR, abs/1412.3714. Shujie Liu, Nan Yang, Mu Li, and Ming Zhou. 2014. A recursive recurrent neural network for statistical machine translation. In Proceedings of the Association for Computational Linguistics. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of the Association for Computational Linguistics. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of the Association for Computational Linguistics. Bo Pang and Lillian Lee. 2005. 
Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the Association for Computational Linguistics. 1690 Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of Empirical Methods in Natural Language Processing. Asad B. Sayeed, Jordan Boyd-Graber, Bryan Rusk, and Amy Weinberg. 2012. Grammatical structures for word-level sentiment detection. In North American Association of Computational Linguistics. Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011a. Dynamic Pooling and Unfolding Recursive Autoencoders for Paraphrase Detection. In Proceedings of Advances in Neural Information Processing Systems. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011b. Semi-Supervised Recursive Autoencoders for Predicting Sentiment Distributions. In Proceedings of Empirical Methods in Natural Language Processing. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013a. Parsing With Compositional Vector Grammars. In Proceedings of the Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of Empirical Methods in Natural Language Processing. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1). Ilya Sutskever, Oriol Vinyals, and Quoc VV Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of Advances in Neural Information Processing Systems. Kai Sheng Tai, Richard Socher, and Christopher D. Manning. 2015. Improved semantic representations from treestructured long short-term memory networks. Tim Van de Cruys. 2014. A neural network approach to selectional preference acquisition. In Proceedings of Empirical Methods in Natural Language Processing. Sida I. Wang and Christopher D. Manning. 2012. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the Association for Computational Linguistics. Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Wsabie: Scaling up to large vocabulary image annotation. In International Joint Conference on Artificial Intelligence. 1691
2015
162
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1692–1701, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics SOLAR: Scalable Online Learning Algorithms for Ranking Jialei Wang1, Ji Wan2,3, Yongdong Zhang2, Steven C. H. Hoi3∗ 1 Department of Computer Science, The University of Chicago, USA 2 Key Laboratory of Intelligent Information Processing, ICT, CAS, China 3 School of Information Systems, Singapore Management University, Singapore [email protected], {wanji,zhyd}@ict.ac.cn, [email protected] Abstract Traditional learning to rank methods learn ranking models from training data in a batch and offline learning mode, which suffers from some critical limitations, e.g., poor scalability as the model has to be retrained from scratch whenever new training data arrives. This is clearly nonscalable for many real applications in practice where training data often arrives sequentially and frequently. To overcome the limitations, this paper presents SOLAR — a new framework of Scalable Online Learning Algorithms for Ranking, to tackle the challenge of scalable learning to rank. Specifically, we propose two novel SOLAR algorithms and analyze their IR measure bounds theoretically. We conduct extensive empirical studies by comparing our SOLAR algorithms with conventional learning to rank algorithms on benchmark testbeds, in which promising results validate the efficacy and scalability of the proposed novel SOLAR algorithms. 1 Introduction Learning to rank [27, 8, 29, 31, 7] aims to learn some ranking model from training data using machine learning methods, which has been actively studied in information retrieval (IR). Specifically, consider a document retrieval task, given a query, a ranking model assigns a relevance score to each document in a collection of documents, and then ranks the documents in decreasing order of relevance scores. The goal of learning to rank is to build a ranking model from training data of a set of queries by optimizing some IR performance measures using machine learning techniques. In literature, various learning to rank techniques have ∗The corresponding author. This work was done when the first two authors visited Dr Hoi’s group. been proposed, ranging from early pointwise approaches [15, 30, 28], to popular pairwise [26, 18, 3], and recent listwise approaches [5, 38]. Learning to rank has many applications, including document retrieval, collaborative filtering, online ad, answer ranking for online QA in NLP [33], etc. Most existing learning to rank techniques follow batch and offline machine learning methodology, which typically assumes all training data are available prior to the learning task and the ranking model is trained by applying some batch learning method, e.g., neural networks [3] or SVM [4]. Despite being studied extensively, the batch learning to rank methodology has some critical limitations. One of serious limitations perhaps is its poor scalability for real-world web applications, where the ranking model has to be re-trained from scratch whenever new training data arrives. This is apparently inefficient and non-scalable since training data often arrives sequentially and frequently in many real applications [33, 7]. 
Besides, batch learning to rank methodology also suffers from slow adaption to fast-changing environment of web applications due to the static ranking models pre-trained from historical batch training data. To overcome the above limitations, this paper investigates SOLAR — a new framework of Scalable Online Learning Algorithms for Ranking, which aims to learn a ranking model from a sequence of training data in an online learning fashion. Specifically, by following the pairwise learning to rank framework, we formally formulate the learning problem, and then present two different SOLAR algorithms to solve the challenging task together with the analysis of their theoretical properties. We conduct an extensive set of experiments by evaluating the performance of the proposed algorithms under different settings by comparing them with both online and batch algorithms on benchmark testbeds in literature. As a summary, the key contributions of this pa1692 per include: (i) we present a new framework of Scalable Online Learning Algorithms for Ranking, which tackles the pairwise learning to ranking problem via a scalable online learning approach; (ii) we present two SOLAR algorithms: a first-order learning algorithm (SOLAR-I) and a second-order learning algorithm (SOLAR-II); (iii) we analyze the theoretical bounds of the proposed algorithms in terms of standard IR performance measures; and (iv) finally we examine the efficacy of the proposed algorithms by an extensive set of empirical studies on benchmark datasets. The rest of this paper is organized as follows. Section 2 reviews related work. Section 3 gives problem formulations of the proposed framework and presents our algorithms, followed by theoretical analysis in Section 4. Section 5 presents our experimental results, and Section 6 concludes this work and indicates future directions. 2 Related Work In general, our work is related to two topics in information retrieval and machine learning: learning to rank and online learning. Both of them have been extensively studied in literature. Below we briefly review important related work in each area. 2.1 Learning to Rank Most of the existing approaches to learning to rank can be generally grouped into three major categories: (i) pointwise approaches, (ii) pairwise approaches, and (iii) listwise approaches. The pointwise approaches treat ranking as a classification or regression problem for predicting the ranking of individual objects. For example, [12, 19] formulated ranking as a regression problem in diverse forms. [30] formulated ranking a binary classification of relevance on document objects, and solved it by discriminative models (e.g., SVM). In [15], Perceptron [32] ranking (known as “Prank”) [15] formulated it as online binary classification. [28] cast ranking as multiple classification or multiple ordinal classification tasks. The pairwise approaches treat the document pairs as training instances and formulate ranking as a classification or regression problem from a collection of pairwise document instances. Example of pairwise learning to rank algorithms include: neural network approaches such as RankNet [3] and LambdaRank [2], SVM approaches such as RankSVM [26], boosting approaches such as RankBoost [18], regression algorithms such as GBRank [43], and probabilistic ranking algorithms such as FRank [35]. The pairwise group is among one of widely and successfully applied approaches. Our work generally belongs to this group. 
The listwise approaches treat a list of documents for a query as a training instance and attempt to learn a ranking model by optimizing some loss defined on the predicted list and the ground-truth list. In general, there are two types of approaches. The first is to directly optimize some IR metrics, such as Mean Average Precision (MAP) and Normalized Discounted Cumulative Gain (NDCG) [25]. Examples include AdaRank by boosting [39], SVM-MAP by optimizing MAP [42], PermuRank [40], and SoftRank [34] based on a smoothed approximation to NDCG, and NDCG-Boost by optimizing NDCG [37], etc. The other is to indirectly optimize the IR metrics by defining some listwise loss function, such as ListNet [5] and ListMLE [38]. Despite being studied actively, most existing works generally belong to batch learning methods, except a few online learning studies. For example, Prank [15] is probably the first online pointwise learning to ranking algorithm. Unlike Prank, our work focuses online pairwise learning to rank technique, which significantly outperforms Prank as observed in our empirical studies. Besides, our work is also related to another existing work in [10], but differs considerably in several aspects: (i) they assume the similarity function is defined in a bi-linear form which is inappropriate for document retrieval applications; (ii) their training data is given in the form of triplet-image instance (p1, p2, p3), while our training data is given in a pairwise query-document instance (qt, d1 t , d2 t ); (iii) they only apply first order online learning algorithms, while we explore both first-order and second-order online algorithms. Finally, we note that our work differs from another series of online learning to rank studies [21, 22, 23, 36, 41] which attempt to explore reinforcement learning or multi-arm bandit techniques for learning to rank from implicit/partial feedback, whose formulation and settings are very different. 2.2 Online Learning Our work is closely related to studies of online learning [24], representing a family of efficient 1693 and scalable machine learning algorithms. In literature, a variety of online algorithms have been proposed, mainly in two major categories: first-order algorithms and second-order algorithms. The notable examples of first-order online learning methods include classical Perceptron [32], and PassiveAggressive (PA) learning algorithms [13]. Unlike first-order algorithms, second-order online learning [6], e.g., Confidence-Weighted (CW) learning [16], usually assumes the weight vector follows a Gaussian distribution and attempts to update the mean and covariance for each received instance. In addition, Adaptive Regularization of Weights Learning (AROW) [14] was proposed to improve robustness of CW. More other online learning methods can be found in [24]. In this work, we apply both first-order and secondorder online learning methods for online learning to rank. 3 SOLAR — Online Learning to Rank We now present SOLAR — a framework of Scalable Online Learning Algorithms for Ranking, which applies online learning to build ranking models from sequential training instances. 3.1 Problem Formulation Without loss of generality, consider an online learning to rank problem for document retrieval, where training data instances arrive sequentially. Let us denote by Q a query space and denote by D a document space. 
Each instance received at time step $t$ is represented by a triplet $(q^{(i)}_t, d^{(1)}_t, d^{(2)}_t)$, where $q^{(i)}_t \in \mathcal{Q}$ denotes the $i$-th query in the entire collection of queries $\mathcal{Q}$, and $d^{(1)}_t \in \mathcal{D}$ and $d^{(2)}_t \in \mathcal{D}$ denote a pair of documents whose ranking is to be predicted w.r.t. the query $q^{(i)}_t$. For the rest of this paper, we simplify the notation $q^{(i)}_t, d^{(1)}_t, d^{(2)}_t$ as $q^i_t, d^1_t, d^2_t$, respectively. We also denote by $y_t \in \{+1, -1\}$ the true ranking order of the pairwise instance at step $t$ such that if $y_t = +1$, document $d^1_t$ is ranked before $d^2_t$; otherwise $d^1_t$ is ranked after $d^2_t$. We introduce a mapping function $\phi: \mathcal{Q} \times \mathcal{D} \to \mathbb{R}^n$ that creates an $n$-dimensional feature vector from a query-document pair. For example, considering $\phi(q, d) \in \mathbb{R}^n$, one way to extract one of the $n$ features is based on term frequency, which counts the number of times the query term of $q$ occurs in document $d$. We also introduce $\mathbf{w}_t \in \mathbb{R}^n$ as the ranking model to be learned at step $t$, which is used to form the target ranking function below:

$f(q^i_t, d^1_t, d^2_t) = \mathbf{w}_t^\top \phi(q^i_t, d^1_t, d^2_t) = \mathbf{w}_t^\top (\phi(q^i_t, d^1_t) - \phi(q^i_t, d^2_t))$

Assume that we have a total of $Q$ queries $\{q^{(i)}\}_{i=1}^{Q}$, each of which is associated with a total of $D_i$ documents and a total of $T_i$ training triplet instances. In a practical document retrieval task, the online learning to rank framework operates in the following procedure: (i) Given a query $q^1$, an initial model $\mathbf{w}_1$ is first applied to rank the set of documents for the query, which are then returned to users; (ii) We then collect the user's feedback (e.g., clickthrough data) as the ground truth labels for the ranking orders of a collection of $T_1$ triplet training instances; (iii) We then apply an online learning algorithm to update the ranking model from the sequence of $T_1$ triplet training instances; (iv) We repeat the above by applying the updated ranking model to process the next query. For a sequence of $T$ triplet training instances, the goal of online learning to rank is to optimize the sequence of ranking models $\mathbf{w}_1, \dots, \mathbf{w}_T$ during the entire online learning process. In general, the proposed online learning to rank scheme is evaluated by measuring the online cumulative MAP [1] or online cumulative NDCG [25]. Let us denote by $\mathrm{NDCG}_i$ and $\mathrm{MAP}_i$ the NDCG and MAP values for query $q^i$, respectively, which are defined as follows:

$\mathrm{NDCG}_i = \frac{1}{N_n} \sum_{r=1}^{D_i} G(l(\pi_f(r)))\, D(r)$  (1)

$\mathrm{MAP}_i = \frac{1}{m} \sum_{s:\, l(\pi_f(s))=1} \frac{\sum_{j \le s} I_{\{l(\pi_f(j))=1\}}}{s}$  (2)

where $I_{\{\cdot\}}$ is an indicator function that outputs 1 when the statement is true and 0 otherwise, $G(K) = 2^K - 1$, $D(K) = \frac{1}{\log_2(1+K)}$, $N_n = \max_{\pi} \sum_{r=1}^{m} G(l(\pi(r)))\, D(r)$, $l(r)$ is the corresponding label as a $K$-level rating, $\pi_f$ denotes a rank list produced by the ranking function $f$, and $m$ is the number of relevant documents. The online cumulative IR measure is defined as the average of the measure over a sequence of $Q$ queries:

$\mathrm{NDCG} = \frac{1}{Q} \sum_{i=1}^{Q} \mathrm{NDCG}_i \qquad \mathrm{MAP} = \frac{1}{Q} \sum_{i=1}^{Q} \mathrm{MAP}_i$  (3)

3.2 First-order SOLAR Algorithm

The key challenge of online learning to rank is how to optimize the ranking model $\mathbf{w}_t$ when receiving a training instance $(q^i_t, d^1_t, d^2_t)$ and its true label $y_t$ at each time step $t$. In the following, we apply the passive-aggressive online learning technique [13] to solve this challenge. First of all, we formulate the problem as an optimization:

$\mathbf{w}_{t+1} = \arg\min_{\mathbf{w}} \frac{1}{2} \|\mathbf{w} - \mathbf{w}_t\|^2 + C\, \ell(\mathbf{w}; (q^i_t, d^1_t, d^2_t), y_t)^2$  (4)

where $\ell(\mathbf{w}_t)$ is a hinge loss defined as $\ell(\mathbf{w}_t) = \max(0,\, 1 - y_t \mathbf{w}_t^\top (\phi(q^i_t, d^1_t) - \phi(q^i_t, d^2_t)))$, and $C$ is a penalty cost parameter.
The above optimization formulation aims to achieve a trade-off between two concerns: (i) the updated ranking model should not deviate too much from the previous ranking model $\mathbf{w}_t$; and (ii) the updated ranking model should suffer a small loss on the triplet instance $(q^i_t, d^1_t, d^2_t)$. Their trade-off is essentially controlled by the penalty cost parameter $C$. Finally, we can derive the following proposition for the solution to the above.

Proposition 1. The optimization in (4) has the following closed-form solution:

$\mathbf{w}_{t+1} = \mathbf{w}_t + \lambda_t y_t (\phi(q^i_t, d^1_t) - \phi(q^i_t, d^2_t))$  (5)

where $\lambda_t$ is computed as follows:

$\lambda_t = \frac{\max(0,\, 1 - y_t \mathbf{w}_t^\top (\phi(q^i_t, d^1_t) - \phi(q^i_t, d^2_t)))}{\|\phi(q^i_t, d^1_t) - \phi(q^i_t, d^2_t)\|^2 + \frac{1}{2C}}$  (6)

It is not difficult to derive the result in the above proposition by following the similar idea of passive-aggressive online learning [13]. We omit the detailed proof here. We can see that if $y_t \mathbf{w}_t^\top (\phi(q^i_t, d^1_t) - \phi(q^i_t, d^2_t)) \ge 1$, then the model remains unchanged, which means that if the current ranking model can correctly rank the order of $d^1_t$ and $d^2_t$ w.r.t. query $q^i_t$ at a large margin, we can keep our model unchanged at this round; otherwise, we will update the current ranking model by the above proposition. Figure 1 gives the framework of the proposed online learning to rank algorithms. We denote the first-order learning to rank algorithm as "SOLAR-I" for short.

Algorithm 1: SOLAR — Scalable Online Learning to Rank
1: Initialize $\mathbf{w}_1 = 0$, $t = 1$
2: for $i = 1, 2, \dots, Q$ do
3:   receive a query $q^i$ and documents for ranking
4:   rank the documents by the current model $\mathbf{w}_t$
5:   acquire user's feedback in triplet instances
6:   for $j = 1, \dots, T_i$ do
7:     update $\mathbf{w}_{t+1}$ with $(q^i_t, d^1_t, d^2_t)$ and $y_t$ by Eqn. (5) (SOLAR-I) or by Eqn. (8) (SOLAR-II)
8:     $t = t + 1$
9:   end for
10: end for
Figure 1: SOLAR: scalable online learning to rank

3.3 Second-order SOLAR Algorithm

The previous algorithm only exploits first-order information of the ranking model $\mathbf{w}_t$. Inspired by recent studies in second-order online learning [6, 16, 14], we explore second-order algorithms for online learning to rank. Specifically, we cast the online learning to rank problem into a probabilistic framework, in which we model feature confidence for a linear ranking model $\mathbf{w}$ with a Gaussian distribution with mean $\mathbf{w} \in \mathbb{R}^d$ and covariance $\Sigma \in \mathbb{R}^{d \times d}$. The mean vector $\mathbf{w}$ is used as the model of the ranking function, and the covariance matrix $\Sigma$ represents our confidence in the model: the smaller the value of $\Sigma_{p,p}$, the more confidence the learner has in the $p$-th feature $w_p$ of the ranking model $\mathbf{w}$. Following a similar intuition to the above section, we want to optimize our ranking model $\mathcal{N}(\mathbf{w}, \Sigma)$ by achieving the following trade-off: (i) to avoid deviating too much from the previous model $\mathcal{N}(\mathbf{w}_t, \Sigma_t)$; (ii) to ensure that it suffers a small loss on the current triplet instance; and (iii) to attain a large confidence on the current instance. Similar to [16], we employ the Kullback-Leibler divergence to measure the distance between the current model $\mathbf{w}$ to be optimized and the previous model $\mathbf{w}_t$, and the regularization terms include both the loss suffered at the current triplet instance and the confidence on the current triplet instance. Specifically, we formulate the optimization of second-order online learning to rank as:

$\{\mathbf{w}_{t+1}, \Sigma_{t+1}\} = \arg\min_{\mathbf{w}, \Sigma} D_{KL}(\mathcal{N}(\mathbf{w}, \Sigma)\,\|\,\mathcal{N}(\mathbf{w}_t, \Sigma_t)) + \frac{\ell(\mathbf{w})^2 + \Omega(\Sigma)}{2\gamma}$  (7)

$\Omega(\Sigma) = (\phi(q^i_t, d^1_t) - \phi(q^i_t, d^2_t))^\top \Sigma\, (\phi(q^i_t, d^1_t) - \phi(q^i_t, d^2_t))$

where $\gamma$ is the trade-off parameter. The following proposition gives the closed-form solution.

Proposition 2.
This optimization problem in (7) has the following closed-form solution: wt+1 = wt + αtΣtyt(φ(qi t, d1 t) −φ(qi t, d2 t)) (8) Σt+1 = Σt −(1/βt)ΣtAΣt (9) where A, βt, and αt are computed as follows: A = (φ(qi t, d1 t) −φ(qi t, d2 t))(φ(qi t, d1 t) −φ(qi t, d2 t))⊤ βt = (φ(qi t, d1 t) −φ(qi t, d2 t))⊤Σt(φ(qi t, d1 t) −φ(qi t, d2 t)) + γ αt = max(0, 1 −ytwt ⊤(φ(qi t, d1 t) −φ(qi t, d2 t)))/βt 1695 The above can be proved by following [14]. We omit the details. We denote the above algorithm as “SOLAR-II” for short. 4 Theoretical Analysis In this section, we theoretically analyze the two proposed algorithms by proving some online cumulative IR measure bounds for both of them. In order to prove the IR measure bounds for the proposed algorithms, we first need to draw the relationships between the cumulative IR measures and the sum of pairwise squared hinge losses. To this purpose, we introduce the following Lemma. Lemma 4.1. For one query qi and its related documents, the NDCG and MAP is lower bounded by its sum of pairwise squared hinge loss suffered by rank model w. NDCGi ≥1 −γNDCG X t ℓ2(w, (qi t, d1 t, d2 t)) MAPi ≥1 −γMAP X t ℓ2(w, (qi t, d1 t, d2 t)) where γNDCG and γMAP are constant specified by the properties of IR measures: γNDCG = G(K−1)D(1) Nn and γMAP = 1 m, G(K) = 2K − 1,D(K) = 1 log2(1+K), Nn = maxπ Pm r=1 G(l(π(r)))D(r), l(r) is the corresponding labels as K-level ratings, π is rank list, m is the number of relevant documents. Sketch Proof. Using the essential loss idea defined in [11], from Theorem 1 of [11] we could see the essential loss is an upper bound of measure-based ranking errors; besides, the essential loss is the lower bound of the sum of pairwise squared hinge loss, using the properties of squared hinge loss, which is non-negative, nonincreasing and satisfy ℓ2(0) = 1. The above lemma indicates that if we could prove bounds for the online cumulative squared hinge loss compared to the best ranking model with all data beforehand, we could obtain the cumulative IR measures bounds. Fortunately there are strong theoretical loss bounds for the proposed online learning to ranking algorithms. The following shows the theorem of such loss bounds for the proposed SOLAR algorithms. Theorem 1. For the SOLAR-I algorithm with Q queries, for any rank model u, suppose R = maxi,t ∥φ(qi t, d1 t ) −φ(qi t, d2 t ))∥, the cumulative squared hinge loss is bounded by Q X i=1 Ti X t=1 ℓ2 t(wt) ≤(R2 + 1 2C )(∥u∥2 + 2C Q X i=1 Ti X t=1 ℓ2 t(u)) (10) The proof for Theorem 1 can be found in Appendix A. By combining the results of Lemma 1 and Theorem 1, we can easily derive the cumulative IR measure bound of the SOLAR-I algorithm. Theorem 2. For the SOLAR-I algorithm with Q queries, for any ranking model u, the NDCG and MAP performances are respectively bounded by NDCG ≥ 1 −γNDCG Q (R2 + 1 2C )(∥u∥2 + 2C Q X i Ti X t=1 ℓ2 t(u)) MAP ≥ 1 −γMAP Q (R2 + 1 2C )(∥u∥2 + 2C Q X i Ti X t=1 ℓ2 t(u)) The analysis of the SOLAR-II algorithm would be much more complex. Let us denote by M(M = |M|) the set of example indices for which the algorithm makes a mistake, and by U(U = |U|) the set of example indices for which there is an update but not a mistake. Let XA = P (qi t,d1 t ,d2 t )∈M∪U(φ(qi t, d1 t ) − φ(qi t, d2 t ))(φ(qi t, d1 t ) −φ(qi t, d2 t ))T . The theorem below give the squared hinge loss bound. Theorem 3. 
For the SOLAR-II algorithm with Q queries, Let χt = (φ(qi t, d1 t ) −φ(qi t, d2 t ))T Σt(φ(qi t, d1 t ) −φ(qi t, d2 t )) of examples in M ∪U at time t, K and k is the maximum and minimum value of χt, respectively. ΣT be the final covariance matrix and uT be the final mean vector. For any ranking model u, the squared hinge loss is bounded by Q X i=1 Ti X t=1 ℓ2 t(wt) ≤ K + γ k + γ (a + Q X i=1 Ti X t=1 ℓt(u)) +(K + γ)(log det(Σ−1 T ) − a2 γ2uT Σ−1 T u) where a = p γ∥u∥2 + utXAu r log(det(I + 1 γ XA)) + U The proof for Theorem 3 can be found in Appendix B. Now, by combining the Lemma 1 and Theorem 3, we can derive the cumulative IR measure bound achieved by the proposed SOLAR-II algorithm. Theorem 4. For the SOLAR-II algorithm with Q queries, for any ranking model u, the NDCG and MAP performances are respectively bounded by NDCG ≥ 1 −γNDCG(K + γ) Q(k + γ) (a + Q X i Ti X t=1 ℓt(u)) −γNDCGb Q MAP ≥ 1 −γMAP(K + γ) Q(k + γ) (a + Q X i Ti X t=1 ℓt(u)) −γMAPb Q where b = (K + γ)(log det(Σ−1 T ) − a2 γ2uT Σ−1 T u) The above theorems show that our online algorithm is no much worse than that of the best ranking model u with all data beforehand. 1696 5 Experiments We conduct extensive experiments to evaluate the efficacy of our algorithms in two major aspects: (i) to examine the learning efficacy of the proposed SOLAR algorithms for online learning to rank tasks; (ii) to directly compare the proposed SOLAR algorithms with the state-of-the-art batch learning to rank algorithms. Besides, we also show an application of our algorithms for transfer learning to rank tasks to demonstrate the importance of capturing changing search intention timely in real web applications. The results are in the supplemental file due to space limitation. 5.1 Experimental Testbed and Metrics We adopt the popular benchmark testbed for learning to rank: LETOR1 [31]. To make a comprehensive comparison, we perform experiments on all the available datasets in LETOR3.0 and LETOR4.0. The statistics are shown in Table 1. For performance evaluation metrics, we adopt the standard IR measures, including ”MAP”, ”NDCG@1”, ”NDCG@5”, and ”NDCG@10”. Table 1: LETOR datasets used in the experiments. Dataset #Queries #features avg#Docs/query OHSUMED 106 45 152.26 MQ2007 1692 46 41.14 MQ2008 784 46 19.40 HP2003 150 64 984.04 HP2004 75 64 992.12 NP2003 75 64 991.04 NP2004 75 64 984.45 TD2003 50 64 981.16 TD2004 50 64 988.61 5.2 Evaluation of Online Rank Performance This experiment evaluates the online learning performance of the proposed algorithms for online learning to rank tasks by comparing them with the existing “Prank” algorithm [15], a Perceptronbased pointwise online learning to rank algorithm, and a recently proposed “Committee Perceptron (Com-P)” algorithm [17], which explores the ensemble learning for Perceptron. We evaluate the performance in terms of both online cumulative NDCG and MAP measures. As it is an online learning task, the parameter C of SOLAR-I is fixed to 10−5 and the parameter γ of SOLAR-II is fixed to 104 for all the datasets, as suggested by [17], we set the number of experts in “ComP” to 20. All experiments were conducted over 10 random permutations of each dataset, and all results were averaged over the 10 runs. 1http://research.microsoft.com/en-us/um/beijing/ projects/letor/ Table 2 give the results of NDCG on all the datasets, where the best results were bolded. Several observations can be drawn as follows. 
First of all, among all the algorithms, we found that both SOLAR-I and SOLAR-II achieve significantly better performance than Prank, which proves the efficacy of the proposed pairwise algorithms. Second, we found that Prank (pointwise) performs extremely poor on several datasets (HP2003, HP2004, NP2003, NP2004, TD2003, TD2004). By looking into the details, we found that it is likely because Prank (pointwise), as a pointwise algorithm, is highly sensitive to the imbalance of training data, and the above datasets are indeed highly imbalanced in which very few documents are labeled as relevant among about 1000 documents per query. By contrast, the pairwise algorithm performs much better. This observation further validates the importance of the proposed pairwise SOLAR algorithms that are insensitive to imbalance issue. Last, by comparing the two SOLAR algorithms, we found SOLAR-II outperforms SOLAR-I in most cases, validating the efficacy of exploiting second-order information. 5.3 Batch v.s. Online Learning 5.3.1 Comparison of ranking performance This experiment aims to directly compare the proposed algorithms with the state-of-the-art batch algorithms in a standard learning to rank setting. We choose four of the most popular and cuttingedge batch algorithms that cover both pairwise and listwise approaches, including RankSVM [20], AdaRank [39], RankBoost [18], and ListNet [5]. For comparison, we follow the standard setting: each dataset is divided into 3 parts: 60% for training, 20% for validation to select the best parameters, and 20% for testing. We use the training data to learn the ranking model by the proposed SOLAR algorithms, the validation data to select the best parameters, and use the test data to evaluate performance. For SOLAR-I, we choose the best parameter C from [10−3.5, 10−6.5] via grid search on the validation set; and similarly for SOLAR-II, we choose the best parameter γ from [103, 106]. Following [31], we adopt 5 division versions of all the datasets, and report the average performance. The results are shown in Table 3, where the best performances were bolded2. Several observations can drawn from the results. 2Results of the baseline algorithms are taken from LETOR. 1697 Table 2: Evaluation of NDCG performance of online learning to rank algorithms. 
Algorithm OHSUMED MQ2007 MQ2008 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 Prank(Pointwise) 0.2689 0.2253 0.2221 0.2439 0.2748 0.3039 0.2369 0.3352 0.4036 Prank(Pairwise) 0.4456 0.3953 0.3904 0.2777 0.3010 0.3294 0.2834 0.3823 0.4403 Com-P 0.4327 0.3993 0.3934 0.3640 0.3828 0.4135 0.3378 0.4415 0.4885 SOLAR-I 0.5060 0.4479 0.4337 0.3760 0.3973 0.4271 0.3490 0.4584 0.5022 SOLAR-II 0.5352 0.4635 0.4461 0.3897 0.4095 0.4383 0.3594 0.4680 0.5107 Algorithm HP2003 HP2004 NP2003 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 Prank(Pointwise) 0.0033 0.0047 0.0050 0.0053 0.0083 0.0088 0.0033 0.0051 0.0075 Prank(Pairwise) 0.5267 0.6491 0.6745 0.5107 0.6438 0.6717 0.4033 0.5926 0.6255 Com-P 0.6487 0.7744 0.7884 0.5640 0.7163 0.7392 0.5227 0.7146 0.7417 SOLAR-I 0.6993 0.7796 0.7917 0.5347 0.7072 0.7335 0.5527 0.7486 0.7792 SOLAR-II 0.7020 0.7959 0.8079 0.5413 0.7146 0.7419 0.5693 0.7621 0.7895 Algorithm NP2004 TD2003 TD2004 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 Prank(Pointwise) 0.0080 0.0100 0.0100 0.0040 0.0063 0.0056 0.0040 0.0018 0.0025 Prank(Pairwise) 0.4213 0.6039 0.6290 0.1920 0.1707 0.1737 0.2773 0.2235 0.2071 Com-P 0.4867 0.6989 0.7226 0.3300 0.2717 0.2635 0.3427 0.2988 0.2794 SOLAR-I 0.5613 0.7649 0.7869 0.2160 0.2968 0.2916 0.2533 0.2750 0.2625 SOLAR-II 0.5627 0.7667 0.7858 0.2960 0.3251 0.3245 0.2893 0.2874 0.2806 0 20 40 60 80 100 120 0.3 0.32 0.34 0.36 0.38 0.4 0.42 0.44 0.46 Number of samples Online cumulative MAP Prank(Pointwise) Prank(Pairwise) Com−P SOLAR−I SOLAR−II 0 200 400 600 800 1000 1200 1400 1600 0.4 0.42 0.44 0.46 0.48 0.5 0.52 0.54 Number of samples Online cumulative MAP Prank(Pointwise) Prank(Pairwise) Com−P SOLAR−I SOLAR−II 0 50 100 150 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Number of samples Online cumulative MAP Prank(Pointwise) Prank(Pairwise) Com−P SOLAR−I SOLAR−II 0 10 20 30 40 50 60 70 80 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Number of samples Online cumulative MAP Prank(Pointwise) Prank(Pairwise) Com−P SOLAR−I SOLAR−II OSHUMED 2007MQ 2003HP 2004HP 0 50 100 150 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 Number of samples Online cumulative MAP Prank(Pointwise) Prank(Pairwise) Com−P SOLAR−I SOLAR−II 0 10 20 30 40 50 60 70 80 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 Number of samples Online cumulative MAP Prank(Pointwise) Prank(Pairwise) Com−P SOLAR−I SOLAR−II 0 10 20 30 40 50 0 0.05 0.1 0.15 0.2 0.25 Number of samples Online cumulative MAP Prank(Pointwise) Prank(Pairwise) Com−P SOLAR−I SOLAR−II 0 10 20 30 40 50 60 70 80 0 0.02 0.04 0.06 0.08 0.1 0.12 0.14 0.16 0.18 0.2 Number of samples Online cumulative MAP Prank(Pointwise) Prank(Pairwise) Com−P SOLAR−I SOLAR−II 2003NP 2004NP 2003TD 2004TD Figure 2: Evaluation of MAP performances of Online Learning to Rank algorithms First of all, we found that no single algorithm beats all the others on all the datasets. Second, on all the datasets, we found that the SOLAR algorithms are generally achieve comparable to the state-of-the-art batch algorithms. On some datasets, e.g., ”MQ2008”, ”MQ2007” ”HP2003”, ”TD2003”, the proposed online algorithms can even achieve best performances in terms of MAP. This encouraging result proves the efficacy of the proposed algorithms as an efficient and scalable online solution to train ranking models. Second, among the two proposed online algorithms, SOLAR-II still outperforms SOLAR-I in most cases, which again shows the importance of exploiting second-order information. 
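Before turning to scalability, the following minimal NumPy sketch (our illustration, not the authors' code) spells out the two updates compared above: the SOLAR-I update of Eqs. (5)–(6) and the SOLAR-II update of Eqs. (8)–(9). Here `delta_phi` stands for the pairwise feature difference $\phi(q^i_t, d^1_t) - \phi(q^i_t, d^2_t)$, and the default values of $C$ and $\gamma$ are the ones fixed in Section 5.2; everything else, including the choice of NumPy, is an assumption.

```python
import numpy as np

def solar_i_update(w, delta_phi, y, C=1e-5):
    """One SOLAR-I step (Eqs. 5-6): passive-aggressive update on a pairwise
    instance, where delta_phi = phi(q, d1) - phi(q, d2) and y is +1 or -1."""
    loss = max(0.0, 1.0 - y * w.dot(delta_phi))             # pairwise hinge loss
    if loss == 0.0:
        return w                                            # passive: ranked correctly with margin
    lam = loss / (delta_phi.dot(delta_phi) + 1.0 / (2.0 * C))
    return w + lam * y * delta_phi                          # aggressive: correct the ranking

def solar_ii_update(w, Sigma, delta_phi, y, gamma=1e4):
    """One SOLAR-II step (Eqs. 8-9): second-order update of the mean vector w
    and of the covariance Sigma, which encodes per-feature confidence."""
    beta = delta_phi @ Sigma @ delta_phi + gamma            # beta_t
    alpha = max(0.0, 1.0 - y * w.dot(delta_phi)) / beta     # alpha_t
    w_new = w + alpha * y * (Sigma @ delta_phi)             # Eq. (8)
    Sigma_new = Sigma - np.outer(Sigma @ delta_phi, delta_phi @ Sigma) / beta  # Eq. (9)
    return w_new, Sigma_new
```

With these two functions, line 7 of Algorithm 1 amounts to calling one of them on each incoming triplet; SOLAR-II additionally maintains Sigma, initialized for instance to the identity matrix (an assumption, as the paper does not state the initialization).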
5.3.2 Scalability Evaluation This experiment aims to examine the scalability of the proposed SOLAR algorithms. We com0 100 200 300 400 500 600 700 800 10 −1 10 0 10 1 10 2 10 3 10 4 Number of queries received Time cost SOLAR−I SOLAR−II RankSVM Figure 3: Scalability Evaluation on “2008MQ” pare it with RankSVM [20], a widely used and efficient batch algorithm. For implementation, we adopt the code from [9] 3, which is known to be the fastest implementation. Figure 3 illustrates the scalability evaluation on “2008MQ” dataset. From the results, we observe that SOLAR is much faster (e.g., 100+ times faster on this dataset)and significantly more scalable than RankSVM. 3http://olivier.chapelle.cc/primal/ 1698 Table 3: Evaluation of NDCG of Online vs Batch Learning to Rank algorithms. Algorithm OHSUMED MQ2007 MQ2008 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 RankSVM 0.4958 0.4164 0.4140 0.4096 0.4142 0.4438 0.3626 0.4695 0.2279 AdaRank-NDCG 0.5330 0.4673 0.4496 0.3876 0.4102 0.4369 0.3826 0.4821 0.2307 RankBoost 0.4632 0.4494 0.4302 0.4134 0.4183 0.4464 0.3856 0.4666 0.2255 ListNet 0.5326 0.4432 0.4410 0.4002 0.4170 0.4440 0.3754 0.4747 0.2303 SOLAR-I 0.5111 0.4668 0.4497 0.3886 0.4101 0.4361 0.3677 0.4634 0.5086 SOLAR-II 0.5397 0.4690 0.4490 0.4104 0.4149 0.4435 0.3720 0.4771 0.5171 Algorithm HP2003 HP2004 NP2003 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 RankSVM 0.6933 0.7954 0.8077 0.5733 0.7512 0.7687 0.5800 0.7823 0.8003 AdaRank-NDCG 0.7133 0.8006 0.8050 0.5867 0.7920 0.8057 0.5600 0.7447 0.7672 RankBoost 0.6667 0.8034 0.8171 0.5067 0.7211 0.7428 0.6000 0.7818 0.8068 ListNet 0.7200 0.8298 0.8372 0.6000 0.7694 0.7845 0.5667 0.7843 0.8018 SOLAR-I 0.7067 0.8036 0.8056 0.5467 0.7325 0.7544 0.5800 0.7664 0.7935 SOLAR-II 0.7000 0.8068 0.8137 0.5733 0.7394 0.7640 0.5667 0.7691 0.7917 Algorithm NP2004 TD2003 TD2004 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 NDCG@1 NDCG@5 NDCG@10 RankSVM 0.5067 0.7957 0.8062 0.3200 0.3621 0.3461 0.4133 0.3240 0.3078 AdaRank-NDCG 0.5067 0.7122 0.7384 0.3600 0.2939 0.3036 0.4267 0.3514 0.3163 RankBoost 0.4267 0.6512 0.6914 0.2800 0.3149 0.3122 0.5067 0.3878 0.3504 ListNet 0.5333 0.7965 0.8128 0.4000 0.3393 0.3484 0.3600 0.3325 0.3175 SOLAR-I 0.5733 0.7814 0.7976 0.2600 0.3060 0.3071 0.3600 0.3119 0.3049 SOLAR-II 0.5733 0.7830 0.8013 0.3000 0.3652 0.3462 0.3333 0.3167 0.3056 6 Conclusions and Future Work This paper presented SOLAR — a new framework of Scalable Online Learning Algorithms for Ranking. SOLAR overcomes the limitations of traditional batch learning to rank for real-world online applications. Our empirical results concluded that SOLAR algorithms share competitive efficacy as the state-of-the-art batch algorithms, but enjoy salient properties which are critical to many applications. Our future work include (i) extending our techniques to the framework of listwise learning to rank; (ii) modifying the framework to handle learning to ranking with ties; and (iii) conducting more in-depth analysis and comparisons to other types of online learning to rank algorithms in diverse settings, e.g., partial feedback [41, 22]. Appendix Proof of Theorem 1 Proof. Let ∆t = ∥wt −u∥2 −∥wt+1 −u∥2, then T X t=1 ∆t = ∥u∥2 −∥wT +1 −u∥2 ≤∥u∥2 Further, ∆t can be expressed as: ∆t = −2λtyt(wt −u) · (φ(qi t, d1 t) −φ(qi t, d2 t)) −λt∥φ(qi t, d1 t) −φ(qi t, d2 t))∥2 ≥ λt(2ℓt(wt) −λt −2ℓt(u)). 
We thus have ∥u∥2 ≥ T X t=1 (2λtℓt(wt) −λ2 t ∥φ(qi t, d1 t ) −φ(qi t, d2 t ))∥2 −2λtℓt(u)) ≥ T X t=1 (2λtℓt(wt) −λ2 t ∥φ(qi t, d1 t ) −φ(qi t, d2 t ))∥2 −2λtℓt(u) −( λt √ 2C − √ 2Cℓt(u))2) ≥ T X t=1 (2λtℓt(wt) −λ2 t (∥φ(qi t, d1 t ) −φ(qi t, d2 t )∥2 + 1 2C ) −2Cℓt(u)2) = T X t=1 ( ℓt(wt)2 ∥φ(qi t, d1 t ) −φ(qi t, d2 t ))∥2 + 1 2C −2Cℓt(u)2) Combining the above concludes the theorem. Appendix B: Proof of Theorem 3 Proof. Using the Cauchy-Schwarz inequality, we have uT T Σ−1 T uT ≥(uT Σ−1 T uT )2 uT Σ−1 T u . Notice that some inequalities could be easily obtained by extending the Lemma3, Lemma 4 and Theorem 2 of [14] to the pairwise setting as follows: uT Σ−1 T uT ≥M + U −P t∈M∪U ℓt(u) γ , X t∈M∪U χt r(χt + γ) ≤log(det(Σ−1 T )) uT T Σ−1 T uT = X t∈M∪U χt r(χt + γ) + X t∈M∪U 1 −ℓ2 t(wt) χt + γ , M + U ≤a + X t∈M∪U ℓt(u) where a = p γ∥u∥2 + utXAu r log(det(I + 1 γ XA)) + U. We thus have X t∈M∪U ℓ2 t (wt) χt + γ ≤ X t∈M∪U χt r(χt + γ) + X t∈M∪U 1 χt + γ − (M + U −P t∈M∪U ℓt(u))2 r2uT Σ−1 T u ≤log(det(Σ−1 T )) + X t∈M∪U 1 χt + γ − a2 r2uT Σ−1 T u ≤log(det(Σ−1 T )) − a2 r2uT Σ−1 T u + M + U k + γ ≤log(det(Σ−1 T )) − a2 r2uT Σ−1 T u + a + P t∈M∪U ℓt(u) k + γ Combining the above, we achieve the final result: Q X i=1 Ti X t=1 ℓ2 t(wt) ≤ K + γ k + γ (a + Q X i=1 Ti X t=1 ℓt(u)) +(K + γ)(log det(Σ−1 T ) − a2 γ2uT Σ−1 T u) 1699 Acknowledgments This work was supported by Singapore MOE tier 1 research grant (C220/MSS14C003) and the National Nature Science Foundation of China (61428207). References [1] R. A. Baeza-Yates and B. A. Ribeiro-Neto. Modern Information Retrieval - the concepts and technology behind search, Second edition. Pearson Education Ltd., Harlow, England, 2011. [2] C. J. C. Burges, R. Ragno, and Q. V. Le. Learning to rank with nonsmooth cost functions. In NIPS, pages 193–200, 2006. [3] C. J. C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. N. Hullender. Learning to rank using gradient descent. In ICML, pages 89–96, 2005. [4] Y. Cao, J. Xu, T.-Y. Liu, H. Li, Y. Huang, and H.-W. Hon. Adapting ranking svm to document retrieval. In SIGIR, pages 186–193, 2006. [5] Z. Cao, T. Qin, T.-Y. Liu, M.-F. Tsai, and H. Li. Learning to rank: from pairwise approach to listwise approach. In ICML, pages 129–136, 2007. [6] N. Cesa-Bianchi, A. Conconi, and C. Gentile. A second-order perceptron algorithm. SIAM J. Comput., 34(3):640–668, 2005. [7] O. Chapelle and Y. Chang. Yahoo! learning to rank challenge overview. In Yahoo! Learning to Rank Challenge, pages 1–24, 2011. [8] O. Chapelle, Y. Chang, and T.-Y. Liu. Future directions in learning to rank. Journal of Machine Learning Research - Proceedings Track, 14:91–100, 2011. [9] O. Chapelle and S. S. Keerthi. Efficient algorithms for ranking with svms. Inf. Retr., 13(3):201–215, 2010. [10] G. Chechik, V. Sharma, U. Shalit, and S. Bengio. Large scale online learning of image similarity through ranking. J. Mach. Learn. Res., 11:1109–1135, Mar. 2010. [11] W. Chen, T.-Y. Liu, Y. Lan, Z. Ma, and H. Li. Ranking measures and loss functions in learning to rank. In NIPS, pages 315–323, 2009. [12] W. S. Cooper, F. C. Gey, and D. P. Dabney. Probabilistic retrieval based on staged logistic regression. In SIGIR’98, pages 198–210. ACM, 1992. [13] K. Crammer, O. Dekel, J. Keshet, S. ShalevShwartz, and Y. Singer. Online passiveaggressive algorithms. Journal of Machine Learning Research, 7:551–585, 2006. [14] K. Crammer, A. Kulesza, and M. Dredze. Adaptive regularization of weight vectors. In NIPS, pages 414–422, 2009. [15] K. Crammer and Y. Singer. 
Pranking with ranking. In NIPS, pages 641–647, 2001. [16] M. Dredze, K. Crammer, and F. Pereira. Confidence-weighted linear classification. In ICML, pages 264–271, 2008. [17] J. L. Elsas, V. R. Carvalho, and J. G. Carbonell. Fast learning of document ranking functions with the committee perceptron. In WSDM, pages 55–64, 2008. [18] Y. Freund, R. D. Iyer, R. E. Schapire, and Y. Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933–969, 2003. [19] F. C. Gey. Inferring probability of relevance using the method of logistic regression. In In Proceedings of ACM SIGIR’94, pages 222– 231. Springer-Verlag, 1994. [20] R. Herbrich, T. Graepel, and K. Obermayer. Large margin rank boundaries for ordinal regression. In Advances in Large Margin Classifiers, pages 115–132, 2000. [21] K. Hofmann. Fast and reliable online learning to rank for information retrieval. Phd thesis, University of Amsterdam, Amsterdam, 05/2013 2013. [22] K. Hofmann, A. Schuth, S. Whiteson, and M. de Rijke. Reusing historical interaction data for faster online learning to rank for ir. In Proceedings of the sixth ACM international conference on Web search and data mining, WSDM, pages 183–192, Rome, Italy, 2013. [23] K. Hofmann, S. Whiteson, and M. Rijke. Balancing exploration and exploitation in listwise and pairwise online learning to rank for information retrieval. Inf. Retr., 16(1):63– 90, Feb. 2013. 1700 [24] S. C. Hoi, J. Wang, and P. Zhao. Libol: A library for online learning algorithms. The Journal of Machine Learning Research, 15(1):495–499, 2014. [25] K. J¨arvelin and J. Kek¨al¨ainen. Ir evaluation methods for retrieving highly relevant documents. In SIGIR, pages 41–48, 2000. [26] T. Joachims. Optimizing search engines using clickthrough data. In KDD, pages 133– 142, 2002. [27] H. Li. Learning to rank for information retrieval and natural language processing. Synthesis Lectures on Human Language Technologies, 7(3):1–121, 2014. [28] P. Li, C. J. C. Burges, and Q. Wu. Mcrank: Learning to rank using multiple classification and gradient boosting. In NIPS, 2007. [29] T.-Y. Liu. Learning to Rank for Information Retrieval. Springer, 2011. [30] R. Nallapati. Discriminative models for information retrieval. In SIGIR’04, pages 64– 71, Sheffield, United Kingdom, 2004. [31] T. Qin, T.-Y. Liu, J. Xu, and H. Li. Letor: A benchmark collection for research on learning to rank for information retrieval. Inf. Retr., 13(4):346–374, 2010. [32] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psych. Rev., 7:551–585, 1958. [33] M. Surdeanu, M. Ciaramita, and H. Zaragoza. Learning to rank answers on large online qa collections. In ACL, pages 719–727, 2008. [34] M. Taylor, J. Guiver, S. Robertson, and T. Minka. Softrank: optimizing non-smooth rank metrics. In Proceedings of the international conference on Web search and web data mining, WSDM, pages 77–86, Palo Alto, California, USA, 2008. ACM. [35] M.-F. Tsai, T.-Y. Liu, T. Qin, H.-H. Chen, and W.-Y. Ma. Frank: a ranking method with fidelity loss. In SIGIR’07, pages 383–390, Amsterdam, The Netherlands, 2007. [36] E. Tsivtsivadze, K. Hoffman, and T. Heskes. Large scale co-regularized ranking. In J. F¨urnkranz and E. H¨ullermeier, editors, ECAI Workshop on Preference Learning, 2012. [37] H. Valizadegan, R. Jin, R. Zhang, and J. Mao. Learning to rank by optimizing ndcg measure. In NIPS, pages 1883–1891, 2009. [38] F. Xia, T.-Y. Liu, J. Wang, W. Zhang, and H. Li. 
Listwise approach to learning to rank: theory and algorithm. In ICML’08, pages 1192–1199, Helsinki, Finland, 2008. [39] J. Xu and H. Li. Adarank: a boosting algorithm for information retrieval. In SIGIR, pages 391–398, 2007. [40] J. Xu, T.-Y. Liu, M. Lu, H. Li, and W.-Y. Ma. Directly optimizing evaluation measures in learning to rank. In SIGIR’08, pages 107– 114, Singapore, Singapore, 2008. ACM. [41] Y. Yue, J. Broder, R. Kleinberg, and T. Joachims. The k-armed dueling bandits problem. J. Comput. Syst. Sci., 78(5):1538– 1556, 2012. [42] Y. Yue, T. Finley, F. Radlinski, and T. Joachims. A support vector method for optimizing average precision. In SIGIR’07, pages 271–278, Amsterdam, The Netherlands, 2007. ACM. [43] Z. Zheng, K. Chen, G. Sun, and H. Zha. A regression framework for learning ranking functions using relative relevance judgments. In SIGIR’07, pages 287–294, Amsterdam, The Netherlands, 2007. 1701
2015
163
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1702–1712, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Text Categorization as a Graph Classification Problem François Rousseau Emmanouil Kiagias LIX, École Polytechnique, France Michalis Vazirgiannis Abstract In this paper, we consider the task of text categorization as a graph classification problem. By representing textual documents as graph-of-words instead of historical n-gram bag-of-words, we extract more discriminative features that correspond to long-distance n-grams through frequent subgraph mining. Moreover, by capitalizing on the concept of k-core, we reduce the graph representation to its densest part – its main core – speeding up the feature extraction step for little to no cost in prediction performances. Experiments on four standard text classification datasets show statistically significant higher accuracy and macro-averaged F1-score compared to baseline approaches. 1 Introduction The task of text categorization finds applications in a wide variety of domains, from news filtering and document organization to opinion mining and spam detection. With the ever-growing quantity of information available online nowadays, it is crucial to provide effective systems capable of classifying text in a timely fashion. Compared to other application domains of classification, its specificity lies in its high number of features, its sparse feature vectors and its skewed multiclass scenario. For instance, when dealing with thousands of news articles, it is not uncommon to have millions of n-gram features, only a few hundreds actually present in each document and tens of class labels – some of them with thousands of articles and some others will only a few hundreds. These particularities have to be taken into account when envisaging a different representation for a document and in our case when considering the task as a graph classification problem. Graphs are powerful data structures that are used to represent complex information about entities and interaction between them and we think text makes no exception. Historically, following the traditional bag-of-words representation, unigrams have been considered as the natural features and later extended to n-grams to capture some word dependency and word order. However, ngrams correspond to sequences of words and thus fail to capture word inversion and subset matching (e. g., “article about news” vs. “news article”). We believe graphs can help solve these issues like they did for instance with chemical compounds where repeating substructure patterns are good indicators of belonging to one particular class, e. g., predicting carcinogenicity in molecules (Helma et al., 2001). Graph classification has received a lot of attention this past decade and various techniques have been developed to deal with the task but rarely applied on textual data and at its scale. In our work, we explored a graph representation of text, namely graph-of-words, to challenge the traditional bag-of-words representation and help better classify textual documents into categories. We first trained a classifier using frequent subgraphs as features for increased effectiveness. We then reduced each graph-of-words to its main core before mining the features for increased efficiency. 
Finally, we also used this technique to reduce the total number of n-gram features considered in the baselines for little to no loss in prediction performances. The rest of the paper is organized as follows. Section 2 provides a review of the related work. Section 3 defines the preliminary concepts upon which our work is built. Section 4 introduces the proposed approaches. Section 5 describes the experimental settings and presents the results we obtained on four standard datasets. Finally, Section 6 concludes our paper and mentions future work directions. 1702 2 Related work In this section, we present the related work in text categorization, graph classification and the combination of the two fields like in our case. 2.1 Text categorization Text categorization, a.k.a. text classification, corresponds to the task of automatically predicting the class label of a given textual document. We refer to (Sebastiani, 2002) for an in-depth review of the earliest works in the field and (Aggarwal and Zhai, 2012) for a survey of the more recent works that capitalize on additional metainformation. We note in particular the seminal work of Joachims (1998) who was the first to propose the use of a linear SVM with TF×IDF term features for the task. This approach is one of the standard baselines because of its simplicity yet effectiveness (unsupervised n-gram feature mining followed by standard supervised learning). Another popular approach is the use of Naive Bayes and its multiple variants (McCallum and Nigam, 1998), in particular for the subtask of spam detection (Androutsopoulos et al., 2000). Finally, there are a couple of works such as (Hassan et al., 2007) that used the graph-of-words representation to propose alternative weights for the n-gram features but still without considering the task as a graph classification problem. 2.2 Graph classification Graph classification corresponds to the task of automatically predicting the class label of a given graph. The learning part in itself does not differ from other supervised learning problems and most proposed methods deal with the feature extraction part. They fall into two main categories: approaches that consider subgraphs as features and graph kernels. 2.2.1 Subgraphs as features The main idea is to mine frequent subgraphs and use them as features for classification, be it with Adaboost (Kudo et al., 2004) or a linear SVM (Deshpande et al., 2005). Indeed, most datasets that were used in the associated experiments correspond to chemical compounds where repeating substructure patterns are good indicators of belonging to one particular class. Some popular graph pattern mining algorithms are gSpan (Yan and Han, 2002), FFSM (Huan et al., 2003) and Gaston (Nijssen and Kok, 2004). The number of frequent subgraphs can be enormous, especially for large graph collections, and handling such a feature set can be very expensive. To overcome this issue, recent works have proposed to retain or even only mine the discriminative subgraphs, i. e. features that contribute to the classification decision, in particular gBoost (Saigo et al., 2009), CORK (Thoma et al., 2009) and GAIA (Jin et al., 2010). However, when experimenting, gBoost did not converge on our larger datasets while GAIA and CORK consider subgraphs of node size at least 2, which exclude unigrams, resulting in poorer performances. 
Moreover, all these approaches have been developed for binary classification, which meant mining features as many times as the number of classes instead of just once (one-vs-all learning strategy). In this paper, we tackle the scalability issue differently through an unsupervised feature selection approach to reduce the size of the graphs and a fortiori the number of frequent subgraphs. 2.2.2 Graph kernels Gärtner et al. (2003) proposed the first kernels between graphs (as opposed to previous kernels on graphs, i. e. between nodes) based on either random walks or cycles to tackle the problem of classification between graphs. In parallel, the idea of marginalized kernels was extended to graphs by Kashima et al. (2003) and by Mahé et al. (2004). We refer to (Vishwanathan et al., 2010) for an indepth review of the topic and in particular its limitations in terms of number of unique node labels, which make them unsuitable for our problem as tested in practice (limited to a few tens of unique labels compared to hundreds of thousands for us). 2.3 Similar works The work of Markov et al. (2007) is perhaps the closest to ours since they also perform subgraph feature mining on graph-of-words representations but with non-standard datasets and baselines. The works of Jiang et al. (2010) and Arora et al. (2010) are also related but their representations are different and closer to parse and dependency trees used as base features for text categorization by Kudo and Matsumoto (2004) and Matsumoto et al. (2005). Moreover, they do not discuss the choice of the support value, which controls the total number of features and can potentially lead to millions of subgraphs on standard datasets. 1703 3 Preliminary concepts In this section, we introduce the preliminary concepts upon which our work is built. 3.1 Graph-of-words We model a textual document as a graph-of-words, which corresponds to a graph whose vertices represent unique terms of the document and whose edges represent co-occurrences between the terms within a fixed-size sliding window. The underlying assumption is that all the words present in a document have some undirected relationships with the others, modulo a window size outside of which the relationship is not considered. This representation was first used in keyword extraction and summarization (Ohsawa et al., 1998; Mihalcea and Tarau, 2004) and more recently in ad hoc IR (Blanco and Lioma, 2012; Rousseau and Vazirgiannis, 2013). We refer to (Blanco and Lioma, 2012) for an in-depth review of the graph representations of text in NLP. system softwar implement disciplin scienc span topic theoret studi rang limit algorithm issu practic hardwar comput As a discipline, computer science spans a range of topics from theoretical studies of algorithms and the limits of computation to the practical issues of implementing computing systems in hardware and software. hide me Figure 1: Graph-of-words representation of a textual document – in bold font, its main core. Figure 1 illustrates the graph-of-words representation of a textual document. The vertices correspond to the remaining terms after standard preprocessing steps have been applied (tokenization, stop word removal and stemming). The undirected edges were drawn between terms co-occurring within a sliding window over the processed text of size 4, value consistently reported as working well in the references aforementioned and validated in our experiments. 
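As a concrete illustration of the graph-of-words construction just described, the sketch below builds the graph from an already preprocessed token sequence; tokenization, stop word removal and stemming are assumed to have been applied beforehand. The authors report using the igraph library, so the use of networkx here, as well as the helper name graph_of_words, is purely an illustrative choice.

```python
import networkx as nx

def graph_of_words(tokens, window_size=4):
    """Build an undirected, unweighted graph whose vertices are the unique
    terms of the document and whose edges link terms co-occurring within a
    sliding window of `window_size` consecutive tokens."""
    g = nx.Graph()
    g.add_nodes_from(set(tokens))
    for i, term in enumerate(tokens):
        # the window covers the current token and the next window_size - 1 tokens
        for other in tokens[i + 1:i + window_size]:
            if other != term:            # no self-loops for repeated terms
                g.add_edge(term, other)
    return g

# preprocessed fragment of the example sentence of Figure 1
doc = "disciplin comput scienc span rang topic theoret studi algorithm".split()
g = graph_of_words(doc)
print(g.number_of_nodes(), g.number_of_edges())
```

With the default window of size 4, every token is linked to at most the three tokens following it, which keeps the graph sparse and cheap to build.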
Edge direction was used by Filippova (2010) so as to extract valid sentences but not here in order to capture some word inversion. Note that for small-enough window sizes (which is typically the case in practice), we can consider that two terms linked represent a longdistance bigram (Bassiou and Kotropoulos, 2010), if not a bigram. Furthermore, by extending the denomination, we can consider that a subgraph of size n is a long-distance n-gram, if not an ngram. Indeed, the nodes belonging to a subgraph do not necessarily appear in a sequence in the document like for a n-gram. Moreover, this enables us to “merge” together n-grams that share the same terms but maybe not in the same order. In the experiments, by abusing the terminology, we will refer to them as n-grams to adopt a common terminology with the baseline approaches. 3.2 Node/edge labels and subgraph matching In graph classification, it is common to introduce a node labeling function µ to map a node id to its label. For instance, consider the case of chemical compounds (e. g., the benzene C6H6). Then in its graph representation (its “structural formula”), it is crucial to differentiate between the multiple nodes labeled the same (e. g., C or H). In the case of graph-of-words, node labels are unique inside a graph since they represent unique terms of the document and we can therefore omit these functions since they are injective in our case and we can substitute node ids for node labels. In particular, the general problem of subgraph matching, which defines an isomorphism between a graph and a subgraph and is NP-complete (Garey and Johnson, 1990), can be reduced to a polynomial problem when node labels are unique. In our experiments, we used the standard algorithm VF2 developed by Cordella et al. (2001). 3.3 K-core and main core Seidman (1983) defined the k-core of a graph as the maximal connected subgraph whose vertices are at least of degree k within the subgraph. The non-empty k-core of largest k is called the main core and corresponds to the most cohesive set(s) of vertices. The corresponding value of k may differ from one graph to another. Batagelj and Zaveršnik (2003) proposed an algorithm to extract the main core of an unweighted graph in time linear in the number of edges, complexity similar in our case to the other NLP preprocessing steps. Bold font on Figure 1 indicates that a vertex belongs to the main core of the graph. 1704 4 Graph-of-words classification In this section, we present our work and the several approaches we explored, from unsupervised feature mining using gSpan to propose more discriminative features than standard n-grams to unsupervised feature selection using k-core to reduce the total number of subgraph and n-gram features. 4.1 Unsupervised feature mining using gSpan We considered the task of text categorization as a graph classification problem by representing textual documents as graph-of-words and then extracting subgraph features to train a graph classifier. Each document is a separate graph-of-words and the collection of documents thus corresponds to a set of graphs. Therefore, for larger datasets, the total number of graphs increases but not the average graph size (the average number of unique terms in a text), assuming homogeneous datasets. Because the total number of unique node labels corresponds to the number of unique terms in the collection in our case, graph kernels are not suitable for us as verified in practice using the MATLAB code made available by Shervashidze (2009). 
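Main-core retention, which Section 4 relies on for trimming the graphs, can be sketched on top of the previous helper as follows. networkx's core decomposition is used as a stand-in for the O(m) algorithm of Batagelj and Zaveršnik (2003) referenced above; the function names are ours.

```python
import networkx as nx

def main_core_terms(g):
    """Return (k, terms) where the k-core of largest k is non-empty and
    `terms` is the set of vertices belonging to that main core."""
    if g.number_of_nodes() == 0:
        return 0, set()
    core_numbers = nx.core_number(g)      # core decomposition of the graph
    k = max(core_numbers.values())
    main_core = nx.k_core(g, k=k)         # maximal subgraph of minimum degree k
    return k, set(main_core.nodes())

k, kept = main_core_terms(g)              # g from the previous sketch
print("main core is the %d-core, retaining %d of %d terms"
      % (k, len(kept), g.number_of_nodes()))
```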
We therefore only explored the methods that consider subgraphs as features. Repeating substructure patterns between graphs are intuitively good candidates for classification since, at least for chemical compounds, shared subparts of molecules are good indicators of belonging to one particular class. We assumed it would the same for text. Indeed, subgraphs of graph-of-words correspond to sets of words co-occurring together, just not necessarily always as the same sequence like for n-grams – it can be seen as a relaxed definition of a n-gram to capture additional variants. We used gSpan (graph-based Substructure pattern (Yan and Han, 2002)) as frequent subgraph miner like (Jiang et al., 2010; Arora et al., 2010) mostly because of its fast available C++ implementation from gBoost (Saigo et al., 2009). Briefly, the key idea behind gSpan is that instead of enumerating all the subgraphs and testing for isomorphism throughout the collection, it first builds for each graph a lexicographic order of all the edges using depth-first-search (DFS) traversal and assigns to it a unique minimum DFS code. Based on all these DFS codes, a hierarchical search tree is constructed at the collection-level. By pre-order traversal of this tree, gSpan discovers all frequent subgraphs with required support. Consider the set of all subgraphs in the collection of graphs, which corresponds to the set of all potential features. Note that there may be overlapping (subgraphs sharing nodes/edges) and redundant (subgraphs included in others) features. Because its size is exponential in the number of edges (just like the number of n-grams is exponential in n), it is common to only retain/mine the most frequent subgraphs (again just like for n-grams with a minimum document frequency (Fürnkranz, 1998; Joachims, 1998)). This is controlled via a parameter known as the support, which sets the minimum number of graphs in which a given subgraph has to appear to be considered as a feature, i. e. the number of subgraph matches in the collection. Here, since node labels are unique inside a graph, we do not have to consider multiple occurrences of the same subgraph in a given graph. The lower the support, the more features selected/considered but the more expensive the mining and the training (not only in time spent for the learning but also for the feature vector generation). 4.2 Unsupervised support selection The optimal value for the support can be learned through cross-validation so as to maximize the prediction accuracy of the subsequent classifier, making the whole feature mining process supervised. But if we consider that the classifier can only improve its goodness of fit with more features (the sets of features being nested as the support varies), it is likely that the lowest support will lead to the best test accuracy; assuming subsequent regularization to prevent overfitting. However, this will come at the cost of an exponential number of features as observed in practice. Indeed, as the support decreases, the number of features increases slightly up until a point where it increases exponentially, which makes both the feature vector generation and the learning expensive, especially with multiple classes. Moreover, we observed that the prediction performances did not benefit that much from using all the possible features (support of 1) as opposed to a more manageable number of features corresponding to a higher support. Therefore, we propose to select the support using the so-called elbow method. 
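The paper does not spell out how the elbow is located, so the following is only one plausible way to operationalize the heuristic: choose the support whose point on the (support, number of features) curve lies furthest from the straight line joining the curve's endpoints. Both this criterion and the toy curve in the usage example are assumptions made for illustration.

```python
import numpy as np

def elbow_support(supports, n_features):
    """Pick the support whose (support, #features) point lies furthest from
    the straight line joining the two endpoints of the curve."""
    x = np.asarray(supports, dtype=float)
    y = np.asarray(n_features, dtype=float)
    # normalise both axes so that distances are comparable
    x_n = (x - x.min()) / (x.max() - x.min())
    y_n = (y - y.min()) / (y.max() - y.min())
    p1 = np.array([x_n[0], y_n[0]])
    direction = np.array([x_n[-1], y_n[-1]]) - p1
    direction = direction / np.linalg.norm(direction)
    # perpendicular distance of every point to the line through the endpoints
    dist = np.abs((x_n - p1[0]) * direction[1] - (y_n - p1[1]) * direction[0])
    return supports[int(np.argmax(dist))]

# hypothetical curve: the number of frequent subgraphs explodes below ~2% support
print(elbow_support([10, 8, 6, 4, 2, 1], [800, 1200, 2500, 6000, 40000, 250000]))
```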
This is an unsupervised empirical method initially developed for selecting the number of clusters in k-means (Thorndike, 1953). Figure 3 (upper plots) in Section 5 illustrates this process. 1705 4.3 Considered classifiers In text categorization, standard baseline classifiers include k-nearest neighbors (kNN) (Larkey and Croft, 1996), Naive Bayes (NB) (McCallum and Nigam, 1998) and linear Support Vector Machines (SVM) (Joachims, 1998) with the latter performing the best on n-gram features as verified in our experiments. Since our subgraph features correspond to “long-distance n-grams”, we used linear SVMs as our classifiers in all our experiments – the goal of our work being to explore and propose better features rather than a different classifier. 4.4 Multiclass scenario In standard binary graph classification (e. g., predicting chemical compounds’ carcinogenicity as either positive or negative (Helma et al., 2001)), feature mining is performed on the whole graph collection as we expect the mined features to be able to discriminate between the two classes (thus producing a good classifier). However, for the task of text categorization, there are usually more than two classes (e. g., 118 categories of news articles for the Reuters-21578 dataset) and with a skewed class distribution (e. g., a lot more news related to “acquisition” than to “grain”). Therefore, a single support value might lead to some classes generating a tremendous number of features (e. g., hundreds of thousands of frequent subgraphs) and some others only a few (e. g., a few hundreds subgraphs) resulting in a skewed and non-discriminative feature set. To include discriminative features for these minority classes, we would need an extremely low support resulting in an exponential number of features because of the majority classes. For these reasons, we decided to mine frequent subgraphs per class using the same relative support (%) and then aggregating each feature set into a global one at the cost of a supervised process (but which still avoids crossvalidated parameter tuning). This was not needed for the tasks of spam detection and opinion mining since the corresponding datasets consist of only two balanced classes. 4.5 Main core mining using gSpan Since the main drawback of mining frequent subgraphs for text categorization rather than chemical compound classification is the very high number of possible subgraphs because of the size of the graphs and the total number of graphs (more than 10x in both cases), we thought of ways to reduce the graphs’ sizes while retaining as much classification information as possible. The graph-of-words representation is designed to capture dependency between words, i. e. dependency between features in the context of machine learning but at the document-level. Initially, we wanted to capture recurring sets of words (i. e. take into account word inversion and subset matching) and not just sequences of words like with n-grams. In terms of subgraphs, this means words that co-occur with each other and form a dense subgraph as opposed to a path like for a ngram. Therefore, when reducing the graphs, we need to keep their densest part(s) and that is why we considered extracting their main cores. Compared to other density-based algorithms, retaining the main core of a graph has the advantage of being linear in the number of edges, i. e. 
in the number of unique terms in a document in our case (the number of edges is at most the number of nodes times the fixed size of the sliding window, a small constant in practice). 4.6 Unsupervised n-gram feature selection Similarly to (Hassan et al., 2007) that used graphof-words to propose alternative weights for the ngram features, we can capitalize on main core retention to still extract binary n-gram features for classification but considering only the terms belonging to the main core of each document. Because some terms never belong to any main core of any document, the dimension of the overall feature space decreases. Additionally, since a document is only represented by a subset of its original terms, the number of non-zero feature values per document also decreases, which matters for SVM, even for the linear kernel, when considering the dual formulation or in the primal with more recent optimization techniques (Joachims, 2006). Compared to most existing feature selection techniques in the field (Yang and Pedersen, 1997), it is unsupervised and corpus-independent as it does not rely on any labeled data like IG, MI or χ2 nor any collection-wide statistics like IDF, which can be of interest for large-scale text categorization in order to process documents in parallel, independently of each other. In some sense, it is similar to what Özgür et al. (2005) proposed with corpus-based and class-based keyword selection for text classification except that we use here document-based keyword selection following the approach from Rousseau and Vazirgiannis (2015). 1706 5 Experiments In this section we present the experiments we conducted to validate our approaches. 5.1 Datasets We used four standard text datasets: two for multiclass document categorization (WebKB and R8), one for spam detection (LingSpam) and one for opinion mining (Amazon) so as to cover all the main subtasks of text categorization: • WebKB: 4 most frequent categories among labeled webpages from various CS departments – split into 2,803 for training and 1,396 for test (Cardoso-Cachopo, 2007, p. 39–41). • R8: 8 most frequent categories of Reuters21578, a set of labeled news articles from the 1987 Reuters newswire – split into 5,485 for training and 2,189 for test (Debole and Sebastiani, 2005). • LingSpam: 2,893 emails classified as spam or legitimate messages – split into 10 sets for 10-fold cross validation (Androutsopoulos et al., 2000). • Amazon: 8,000 product reviews over four different sub-collections (books, DVDs, electronics and kitchen appliances) classified as positive or negative – split into 1,600 for training and 400 for test each (Blitzer et al., 2007). 5.2 Implementation We developed our approaches mostly in Python using the igraph library (Csardi and Nepusz, 2006) for the graph representation and main core extraction. For unsupervised subgraph feature mining, we used the C++ implementation of gSpan from gBoost (Saigo et al., 2009). Finally for classification and standard n-gram text categorization we used scikit (Pedregosa et al., 2011), a standard Python machine learning library. 5.3 Evaluation metrics To evaluate the performance of our proposed approaches over standard baselines, we computed on the test set both the micro- and macro-average F1score. 
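A minimal sketch of this document-level selection step, reusing the graph_of_words and main_core_terms helpers from the earlier sketches: each document is reduced to its main-core terms and binary n-gram features are then extracted as usual. The choice of scikit-learn's CountVectorizer and of trigrams as the upper bound are illustrative assumptions; the paper itself only states that binary n-gram features are extracted from the reduced documents.

```python
from sklearn.feature_extraction.text import CountVectorizer

def reduce_to_main_cores(token_lists, window_size=4):
    """Keep only the tokens of each document that belong to the main core of
    that document's graph-of-words."""
    reduced = []
    for tokens in token_lists:
        _, kept = main_core_terms(graph_of_words(tokens, window_size))
        reduced.append(" ".join(t for t in tokens if t in kept))
    return reduced

corpus = [doc, "comput scienc studi algorithm data structur comput".split()]
reduced_corpus = reduce_to_main_cores(corpus)

# binary n-gram features (here up to trigrams) over the reduced documents
vectorizer = CountVectorizer(binary=True, ngram_range=(1, 3), token_pattern=r"\S+")
X = vectorizer.fit_transform(reduced_corpus)
print(X.shape)   # fewer features and fewer non-zero values per document
```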
Because we are dealing with single-label classification, the micro-average F1-score corresponds to the accuracy and is a measure of the overall prediction effectiveness (Manning et al., Dataset # subgraphs before # subgraphs after reduction WebKB 30,868 10,113 67 % R8 39,428 11,373 71 % LingSpam 54,779 15,514 72 % Amazon 16,415 8,745 47 % Dataset # n-grams before # n-grams after reduction WebKB 1,849,848 735,447 60 % R8 1,604,280 788,465 51 % LingSpam 2,733,043 1,016,061 63 % Amazon 583,457 376,664 35 % Table 1: Total number of features (n-grams or subgraphs) vs. number of features present only in main cores along with the reduction of the dimension of the feature space on all four datasets. 2008, p. 281). Conversely, the macro-average F1score takes into account the skewed class label distributions by weighting each class uniformly. The statistical significance of improvement in accuracy over the n-gram SVM baseline was assessed using the micro sign test (p < 0.05) (Yang and Liu, 1999). For the Amazon dataset, we report the average of each metric over the four sub-collections. 5.4 Results Table 2 shows the results on the four considered datasets. The first three rows correspond to the baselines: unsupervised n-gram feature extraction and then supervised learning using kNN, NB (Multinomial but Bernoulli yields similar results) and linear SVM. The last three rows correspond to our approaches. In our first approach, denoted as “gSpan + SVM”, we mine frequent subgraphs (gSpan) as features and then train a linear SVM. These features correspond to long-distance n-grams. This leads to the best results in text categorization on almost all datasets (all if we compare to baseline methods), in particular on multiclass document categorization (R8 and WebKB). In our second approach, denoted as “MC + gSpan + SVM”, we repeat the same procedure except that we mine frequent subgraphs (gSpan) from the main core (MC) of each graph-of-words and then train an SVM on the resulting features. Main cores can vary from 1-core to 12-core depending on the graph structure, 5-core and 6-core being the most frequent (more than 60%). This yields results similar to the SVM baseline for a faster mining and training compared to gSpan + SVM. Table 1 (upper table) shows the reduction in the dimension of the feature space and we see 1707 Table 2: Test accuracy and macro-average F1-score on four standard datasets. Bold font marks the best performance in a column. * indicates statistical significance at p < 0.05 using micro sign test with regards to the SVM baseline of the same column. MC corresponds to unsupervised feature selection using the main core of each graph-of-words to extract n-gram and subgraph features. gSpan mining support values are 1.6% (WebKB), 7% (R8), 4% (LingSpam) and 0.5% (Amazon). 
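For readers unfamiliar with the micro sign test used above, the sketch below applies a two-sided sign (binomial) test to the test documents on which exactly one of the two classifiers is correct. This follows the general recipe of Yang and Liu (1999) only approximately, and the toy predictions are hypothetical.

```python
from scipy.stats import binomtest

def micro_sign_test(y_true, pred_a, pred_b):
    """Two-sided sign test over the documents where exactly one system is right."""
    only_a = only_b = 0
    for t, a, b in zip(y_true, pred_a, pred_b):
        if a == t and b != t:
            only_a += 1
        elif b == t and a != t:
            only_b += 1
    n = only_a + only_b
    return 1.0 if n == 0 else binomtest(only_a, n, 0.5).pvalue

# toy data: system A fixes 12 of B's errors, B fixes 3 of A's
y_true = [1] * 100
pred_a = [1] * 97 + [0] * 3       # hypothetical predictions, A wrong on 3 documents
pred_b = [0] * 12 + [1] * 88      # hypothetical predictions, B wrong on 12 documents
print(micro_sign_test(y_true, pred_a, pred_b) < 0.05)   # True
```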
Method Dataset WebKB R8 LingSpam Amazon Accuracy F1-score Accuracy F1-score Accuracy F1-score Accuracy F1-score kNN (k=5) 0.679 0.617 0.894 0.705 0.910 0.774 0.512 0.644 NB (Multinomial) 0.866 0.861 0.934 0.839 0.990 0.971 0.768 0.767 linear SVM 0.889 0.871 0.947 0.858 0.991 0.973 0.792 0.790 gSpan + SVM 0.912* 0.882 0.955* 0.864 0.991 0.972 0.798* 0.795 MC + gSpan + SVM 0.901* 0.871 0.949* 0.858 0.990 0.973 0.800* 0.798 MC + SVM 0.872 0.863 0.937 0.849 0.990 0.972 0.786 0.774 # non-zero n-gram feature values before unsupervised feature selection 0 50 100 150 200 250 # documents 0 1000 2000 3000 4000 5000 # non-zero n-gram feature values after unsupervised feature selection 0 50 100 150 200 250 # documents Figure 2: Distribution of non-zero n-gram feature values before and after unsupervised feature selection (main core retention) on R8 dataset. that on average less than 60% of the subgraphs are kept for little to no cost in prediction effectiveness. In our final approach, denoted as “MC + SVM”, we performed unsupervised feature selection by keeping the terms appearing in the main core (MC) of each document’s graph-of-words representation and then extracted standard n-gram features. Table 1 (lower table) shows the reduction in the dimension of the feature space and we see that on average less than half the n-grams remain. Figure 2 shows the distribution of non-zero features before and after the feature selection on the R8 dataset. Similar changes in distribution can be observed on the other datasets, from a right-tail Gaussian to a power law distribution as expected from the main core retention. Table 2 shows that the main core retention has little to no cost in accuracy and F1score but can reduce drastically the feature space and the number of non-zero values per document. 1 2 3 4 support (%) 0 50k 100k 150k 200k 250k # features 5 6 7 8 9 10 11 12 13 support (%) 1 2 3 4 support (%) 0.85 0.90 0.95 1.00 accuracy 5 6 7 8 9 10 11 12 13 support (%) Figure 3: Number of subgraph features/accuracy in test per support (%) on WebKB (left) and R8 (right) datasets: in black, the selected support value chosen via the elbow method and in red, the accuracy in test for the SVM baseline. 5.5 Unsupervised support selection Figure 3 above illustrates the unsupervised heuristic (elbow method) we used to select the support value, which corresponds to the minimum number of graphs in which a subgraph has to appear to be considered frequent. We noticed that as the support decreases, the number of features increases slightly up until a point where it increases exponentially. This support value, highlighted in black on the figure and chosen before taking into account the class label, is the value we used in our experiments and for which we report the results in Table 1 and 2. The lower plots provide evidence 1708 1-grams 2-grams 3-grams 4-grams 5-grams 6-grams 0 20 40 60 80 100 # features (%) baseline gSpan MC + gSpan Figure 4: Distribution of n-grams (standard and long-distance ones) among all the features on WebKB dataset. that the elbow method helps selecting in an unsupervised manner a support that leads to the best or close to the best accuracy. 5.6 Distribution of mined n-grams In order to gain more insights on why the longdistance n-grams mined with gSpan result in better classification performances than the baseline ngrams, we computed the distribution of the number of unigrams, bigrams, etc. 
up to 6-grams in the traditional feature set and ours (Figure 4) as well as in the top 5% features that contribute the most to the classification decision of the trained SVM (Figure 5). Again, a long-distance n-gram corresponds to a subgraph of size n in a graph-of-words and can be seen as a relaxed definition of the traditional n-gram, one that takes into account word inversion for instance. To obtain comparable results, we considered for the baseline n-grams with a minimum document frequency equal to the support. Otherwise, by definition, there are at least as many bigrams as there are unigrams and so forth. Figure 4 shows that our approaches mine way more n-grams than unigrams compared to the baseline. This happens because with graph-ofwords a subgraph of size n corresponds to a set of n terms while with bag-of-words a n-gram corresponds to a sequence of n terms. Note that even when restricting the subgraphs to the main cores, there are still more higher order n-grams mined. Figure 5 shows that the higher order n-grams still contribute indeed to the classification decision and in higher proportion than with the baseline, even when restricting to the main cores. For 1-grams 2-grams 3-grams 4-grams 5-grams 6-grams 0 20 40 60 80 100 # features (%) baseline SVM gSpan + SVM MC + gSpan + SVM Figure 5: Distribution of n-grams (standard and long-distance ones) among the top 5% most discriminative features for SVM on WebKB dataset. instance, on the R8 dataset, {bank, base, rate} was a discriminative (top 5% SVM features) longdistance 3-gram for the category “interest” and occurred in documents in the form of “barclays bank cut its base lending rate”, “midland bank matches its base rate” and “base rate of natwest bank dropped”, pattern that would be hard to capture with traditional n-gram bag-of-words. 5.7 Timing With an Intel Core i5-3317U clocking at 2.6GHz and 8GB of RAM, mining the subgraph features with gSpan takes on average 30s for the selected support. It can take several hours with lower support and goes down to 5s using the main cores. 6 Conclusion In this paper, we tackled the task of text categorization by representing documents as graphof-words and then considering the problem as a graph classification one. We were able to extract more discriminative features that correspond to long-distance n-grams through frequent subgraph mining. Experiments on four standard datasets show statistically significant higher accuracy and macro-averaged F1-score compared to baselines. To the best of our knowledge, graph classification has never been tested at that scale – thousands of graphs and tens of thousands of unique node labels – and also in the multiclass scenario. For these reasons, we could not capitalize on all standard methods. In particular, we believe new kernels that support a very high number of unique node labels could yield even better performances. 1709 References Charu C. Aggarwal and ChengXiang Zhai. 2012. A Survey of Text Classification Algorithms. In Mining Text Data, pages 163–222. Ion Androutsopoulos, John Koutsias, Konstantinos V. Chandrinos, George Paliouras, and Constantine D. Spyropoulos. 2000. An Evaluation of Naive Bayesian Anti-Spam Filtering. In Proceedings of the Workshop on Machine Learning in the New Information Age, 11th European Conference on Machine Learning, pages 9–17. Shilpa Arora, Elijah Mayfield, Carolyn Penstein-Rosé, and Eric Nyberg. 2010. Sentiment Classification Using Automatically Extracted Subgraph Features. 
In Proceedings of the NAACL HLT 2010 Workshop on Computational Approaches to Analysis and Generation of Emotion in Text, CAAGET ’10, pages 131–139. Nikoletta Bassiou and Constantine Kotropoulos. 2010. Word Clustering Using PLSA Enhanced with Long Distance Bigrams. In Proceedings of the 20th International Conference on Pattern Recognition, ICPR ’10, pages 4226–4229. Vladimir Batagelj and Matjaž Zaversnik. 2003. An O(m) Algorithm for Cores Decomposition of Networks. The Computing Research Repository (CoRR), cs.DS/0310049. Roi Blanco and Christina Lioma. 2012. Graph-based term weighting for information retrieval. Information Retrieval, 15(1):54–92. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boomboxes and blenders: Domain adaptation for sentiment classification. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, ACL ’07, pages 440–447. Ana Cardoso-Cachopo. 2007. Improving Methods for Single-label Text Categorization. Ph.D. thesis, Instituto Superior Técnico, Universidade de Lisboa, Lisbon, Portugal. Luigi Pietro Cordella, Pasquale Foggia, Carlo Sansone, and Mario Vento. 2001. An improved algorithm for matching large graphs. In Proceedings of the 3rd IAPR-TC15 Workshop on Graph-based Representations in Pattern Recognition, pages 149–159. Gabor Csardi and Tamas Nepusz. 2006. The igraph software package for complex network research. InterJournal, Complex Systems, 1695(5):1–9. Franca Debole and Fabrizio Sebastiani. 2005. An Analysis of the Relative Hardness of Reuters-21578 Subsets: Research Articles. Journal of the American Society for Information Science and Technology, 56(6):584–596. Mukund Deshpande, Michihiro Kuramochi, Nikil Wale, and George Karypis. 2005. Frequent Substructure-Based Approaches for Classifying Chemical Compounds. IEEE Transactions on Knowledge and Data Engineering, 17(8):1036– 1050. Katja Filippova. 2010. Multi-sentence Compression: Finding Shortest Paths in Word Graphs. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, pages 322– 330. Johannes Fürnkranz. 1998. A study using n-gram features for text categorization. Technical Report OEFAI-TR-98-30, Austrian Research Institute for Artificial Intelligence. Michael R. Garey and David S. Johnson. 1990. Computers and Intractability; A Guide to the Theory of NP-Completeness. W. H. Freeman & Co. Thomas Gärtner, Peter Flach, and Stefan Wrobel. 2003. On graph kernels: Hardness results and efficient alternatives. In Proceedings of the Annual Conference on Computational Learning Theory, COLT ’03, pages 129–143. Samer Hassan, Rada Mihalcea, and Carmen Banea. 2007. Random-Walk Term Weighting for Improved Text Classification. In Proceedings of the International Conference on Semantic Computing, ICSC ’07, pages 242–249. Christoph Helma, Ross D. King, Stefan Kramer, and Ashwin Srinivasan. 2001. The predictive toxicology challenge 2000–2001. Bioinformatics, 17(1):107–108. Jun Huan, Wei Wang, and Jan Prins. 2003. Efficient Mining of Frequent Subgraphs in the Presence of Isomorphism. In Proceedings of the 3rd IEEE International Conference on Data Mining, ICDM ’03, pages 549–552. Chuntao Jiang, Frans Coenen, Robert Sanderson, and Michele Zito. 2010. Text classification using graph mining-based feature extraction. Knowledge-Based Systems, 23(4):302–308. Ning Jin, Calvin Young, and Wei Wang. 2010. GAIA: graph classification using evolutionary computation. 
In Proceedings of the 2010 ACM SIGMOD international conference on Management of data, SIGMOD ’10, pages 879–890. Thorsten Joachims. 1998. Text categorization with Support Vector Machines: Learning with many relevant features. In Proceedings of the 10th European Conference on Machine Learning, ECML ’98, pages 137–142. Thorsten Joachims. 2006. Training Linear SVMs in Linear Time. In Proceedings of the 12th ACM SIGKDD international conference on Knowledge Discovery and Data mining, KDD ’06, pages 217– 226. 1710 Hisashi Kashima, Koji Tsuda, and Akihiro Inokuchi. 2003. Marginalized kernels between labeled graphs. In Proceedings of the 20th International Conference on Machine Learning, volume 3 of ICML ’03, pages 321–328. Taku Kudo and Yuji Matsumoto. 2004. A Boosting Algorithm for Classification of Semi-Structured Text. In Proceedings of the 9th Conference on Empirical Methods in Natural Language Processing, volume 4 of EMNLP ’04, pages 301–308. Taku Kudo, Eisaku Maeda, and Yuji Matsumoto. 2004. An application of boosting to graph classification. In Advances in Neural Information Processing Systems 17, NIPS ’04, pages 729–736. Leah S. Larkey and W. Bruce Croft. 1996. Combining Classifiers in Text Categorization. In Proceedings of the 19th annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’96, pages 289–297. Pierre Mahé, Nobuhisa Ueda, Tatsuya Akutsu, JeanLuc Perret, and Jean-Philippe Vert. 2004. Extensions of marginalized graph kernels. In Proceedings of the 21st International Conference on Machine Learning, ICML ’04, pages 70–78. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA. Alex Markov, Mark Last, and Abraham Kandel. 2007. Fast Categorization of Web Documents Represented by Graphs. In Advances in Web Mining and Web Usage Analysis, number 4811 in Lecture Notes in Artificial Intelligence, pages 56–71. Shotaro Matsumoto, Hiroya Takamura, and Manabu Okumura. 2005. Sentiment Classification Using Word Sub-sequences and Dependency Sub-trees. In Proceedings of the 9th Pacific-Asia Conference on Advances in Knowledge Discovery and Data Mining, PAKDD ’05, pages 301–311. Andrew McCallum and Kamal Nigam. 1998. A comparison of event models for Naive Bayes text classification. In Proceedings of the AAAI workshop on learning for text categorization, AAAI ’98, pages 41–48. Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing Order into Texts. In Proceedings of the 9th Conference on Empirical Methods in Natural Language Processing, EMNLP ’04, pages 404–411. Siegfried Nijssen and Joost N. Kok. 2004. A Quickstart in Frequent Structure Mining Can Make a Difference. In Proceedings of the 10th ACM SIGKDD international conference on Knowledge Discovery and Data mining, KDD ’04, pages 647–652. Yukio Ohsawa, Nels E. Benson, and Masahiko Yachida. 1998. KeyGraph: Automatic Indexing by Co-occurrence Graph Based on Building Construction Metaphor. In Proceedings of the Advances in Digital Libraries Conference, ADL ’98, pages 12– 18. Arzucan Özgür, Levent Özgür, and Tunga Güngör. 2005. Text Categorization with Class-based and Corpus-based Keyword Selection. In Proceedings of the 20th International Conference on Computer and Information Sciences, ISCIS ’05, pages 606– 615. Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, J. Vanderplas, A. 
Passos, D. Cournapeau, M. Brucher, M. Perrot, and E. Duchesnay. 2011. Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12:2825–2830. François Rousseau and Michalis Vazirgiannis. 2013. Graph-of-word and TW-IDF: New Approach to Ad Hoc IR. In Proceedings of the 22nd ACM international conference on Information and knowledge management, CIKM ’13, pages 59–68. François Rousseau and Michalis Vazirgiannis. 2015. Main Core Retention on Graph-of-words for SingleDocument Keyword Extraction. In Proceedings of the 37th European Conference on Information Retrieval, ECIR ’15, pages 382–393. Hiroto Saigo, Sebastian Nowozin, Tadashi Kadowaki, Taku Kudo, and Koji Tsuda. 2009. gBoost: a mathematical programming approach to graph classification and regression. Machine Learning, 75(1):69– 89. Fabrizio Sebastiani. 2002. Machine Learning in Automated Text Categorization. ACM Computing Surveys, 34(1):1–47. Stephen B. Seidman. 1983. Network structure and minimum degree. Social Networks, 5:269–287. Nino Shervashidze. Visited on 30/05/2015. Graph kernels. http://www.di.ens.fr/~shervashidze/code.html. Marisa Thoma, Hong Cheng, Arthur Gretton, Jiawei Han, Hans-Peter Kriegel, Alexander J. Smola, Le Song, Philip S. Yu, Xifeng Yan, and Karsten M. Borgwardt. 2009. Near-optimal Supervised Feature Selection among Frequent Subgraphs. In Proceedings of the SIAM International Conference on Data Mining, SDM ’09, pages 1076–1087. Robert Thorndike. 1953. Who belongs in the family? Psychometrika, 18(4):267–276. S. V. N. Vishwanathan, Nicol N. Schraudolph, Risi Kondor, and Karsten M. Borgwardt. 2010. Graph kernels. Journal of Machine Learning Research, 11:1201–1242. 1711 Xifeng Yan and Jiawei Han. 2002. gspan: Graphbased substructure pattern mining. In Proceedings of the 2nd IEEE International Conference on Data Mining, ICDM ’02, pages 721–724. Yiming Yang and Xin Liu. 1999. A Re-examination of Text Categorization Methods. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’99, pages 42–49. Yiming Yang and J. O. Pedersen. 1997. A Comparative Study on Feature Selection in Text Categorization. In Proceedings of the 14th International Conference on Machine Learning, ICML ’97, pages 412–420. 1712
2015
164
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1713–1722, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Inverted indexing for cross-lingual NLP Anders Søgaard∗ ˇZeljko Agi´c∗ H´ector Mart´ınez Alonso∗ Barbara Plank∗ Bernd Bohnet† Anders Johannsen∗ ∗Center for Language Technology, University of Copenhagen, Denmark †Google, London, United Kingdom [email protected] Abstract We present a novel, count-based approach to obtaining inter-lingual word representations based on inverted indexing of Wikipedia. We present experiments applying these representations to 17 datasets in document classification, POS tagging, dependency parsing, and word alignment. Our approach has the advantage that it is simple, computationally efficient and almost parameter-free, and, more importantly, it enables multi-source crosslingual learning. In 14/17 cases, we improve over using state-of-the-art bilingual embeddings. 1 Introduction Linguistic resources are hard to come by and unevenly distributed across the world’s languages. Consequently, transferring linguistic resources or knowledge from one language to another has been identified as an important research problem. Most work on cross-lingual transfer has used English as the source language. There are two reasons for this; namely, the availability of English resources and the availability of parallel data for (and translations between) English and most other languages. In cross-lingual syntactic parsing, for example, two approaches to cross-lingual learning have been explored, namely annotation projection and delexicalized transfer. Annotation projection (Hwa et al., 2005) uses word-alignments in human translations to project predicted sourceside analyses to the target language, producing a noisy syntactically annotated resource for the target language. On the other hand, delexicalized transfer (Zeman and Resnik, 2008; McDonald et al., 2011; Søgaard, 2011) simply removes lexical features from mono-lingual parsing models, but assumes reliable POS tagging for the target language. Delexicalized transfer works particularly well when resources from several source languages are used for training; learning from multiple other languages prevents over-fitting to the peculiarities of the source language. Some authors have also combined annotation projection and delexicalized transfer, e.g., McDonald et al. (2011). Others have tried to augment delexicalized transfer models with bilingual word representations (T¨ackstr¨om et al., 2013; Xiao and Guo, 2014). In cross-lingual POS tagging, mostly annotation projection has been explored (Fossum and Abney, 2005; Das and Petrov, 2011), since all features in POS tagging models are typically lexical. However, using bilingual word representations was recently explored as an alternative to projectionbased approaches (Gouws and Søgaard, 2015). The major drawback of using bi-lexical representations is that it limits us to using a single source language. T¨ackstr¨om et al. (2013) obtained significant improvements using bilingual word clusters over a single source delexicalized transfer model, for example, but even better results were obtained with delexicalized transfer in McDonald et al. (2011) by simply using several source languages. 
This paper introduces a simple method for obtaining truly inter-lingual word representations in order to train models with lexical features on several source languages at the same time. Briefly put, we represent words by their occurrence in clusters of Wikipedia articles linking to the same concept. Our representations are competitive with 1713 state-of-the-art neural net word embeddings when using only a single source language, but also enable us to exploit the availability of resources in multiple languages. This also makes it possible to explore multi-source transfer for POS tagging. We evaluate the method across POS tagging and dependency parsing datasets in four languages in the Google Universal Treebanks v. 1.0 (see §3.2.1), as well as two document classification datasets and four word alignment problems using a handaligned text. Finally, we also directly compare our results to Xiao and Guo (2014) on parsing data for four languages from CoNLL 2006 and 2007. Contribution • We present a novel approach to cross-lingual word representations with several advantages over existing methods: (a) It does not require training neural networks, (b) it does not rely on the availability of parallel data between source and target language, and (c) it enables multi-source transfer with lexical representations. • We present an evaluation of our inter-lingual word representations, based on inverted indexing, across four tasks: document classification, POS tagging, dependency parsing, and word alignment, comparing our representations to two state-of-the-art neural net word embeddings. For the 17 datasets, for which we can make this comparison, our system is better than these embedding models on 14 datasets. The word representations are made publicly available at https:// bitbucket.org/lowlands/ 2 Distributional word representations Most NLP models rely on lexical features. Encoding the presence of words leads to highdimensional and sparse models. Also, simple bagof-words models fail to capture the relatedness of words. In many tasks, synonymous words should be treated alike, but their bag-of-words representations are as different as those of dog and therefore. Distributional word representations are supposed to capture distributional similarities between words. Intuitively, we want similar words to have similar representations. Known approaches focus on different kinds of similarity, some more syntactic, some more semantic. The representations are typically either clusters of distributionally similar words, e.g., Brown et al. (1992), or vector representations. In this paper, we focus on vector representations. In vector-based approaches, similar representations are vectors close in some multi-dimensional space. 2.1 Count-based and prediction-based representations There are, briefly put, two approaches to inducing vector-based distributional word representations from large corpora: count-based and predictionbased approaches (Baroni et al., 2014). Countbased approaches represent words by their cooccurrences. Dimensionality reduction is typically performed on a raw or weighted co-occurrence matrix using methods such as singular value decomposition (SVD), a method for maximizing the variance in a dataset in few dimensions. In our inverted indexing, we use raw co-occurrence data. Prediction-based methods use discriminative learning techniques to learn how to predict words from their context, or vice versa. 
They rely on a neural network architecture, and once the network converges, they use word representations from a middle layer as their distributional representations. Since the network learns to predict contexts from this representation, words occurring in the same contexts will get similar representations. In §2.1.2, we briefly introduce the skipgram and CBOW models (Mikolov et al., 2013; Collobert and Weston, 2008). Baroni et al. (2014) argue in favor of predictionbased representations, but provide little explanation why prediction-based representations should be better. One key finding, however, is that prediction-based methods tend to be more robust than count-based methods, and one reason for this seems to be better regularization. 2.1.1 Monolingual representations Count-based representations rely on cooccurrence information in the form of binary matrices, raw counts, or point-wise mutual information (PMI). The PMI between two words is P(wi; wj) = log P(wi | wj) P(wi) and PMI representations associate a word wi with a vector of its PMIs with all other words wj. Dimensionality reduction is typically performed using SVD. We will refer to two prediction-based approaches to learning word vectors, below: the 1714 KLEMENTIEV CHANDAR INVERTED es coche (’car’, NOUN) approximately beyond upgrading car bicycle cars driving car cars expressed (’expressed’, VERB) 1.61 55.8 month-to-month reiterates reiterating confirming exists defining example tel´efono (’phone’, NOUN) alexandra davison creditor phone telephone e-mail phones phone telecommunication ´arbol (’tree’, NOUN) tree market-oriented assassinate tree bread wooden tree trees grows escribi´o (’wrote’, VERB) wrote alleges testified wrote paul palace wrote inspired inspiration amarillo (’yellow’, ADJ) yellow louisiana 1911 crane grabs outfit colors yellow oohs de auto (’car’, NOUN) car cars camaro ausgedr¨uckt (’expressed’, VERB) adjective decimal imperative fr voiture (’car’, NOUN) mercedes-benz cars quickest exprim´e (’expressed’, VERB) simultaneously instead possible t´el´ephone (’phone’, NOUN) phone create allowing arbre (’tree’, NOUN) tree trees grows ´ecrit (’wrote’, VERB) published writers books jaune (’yellow’, ADJ) classification yellow stages sv bil (’car’, NOUN) cars car automobiles uttryckte (’expressed’, VERB) rejected threatening unacceptable telefon (’phone’, NOUN) telephones telephone share tr¨ad (’tree’, NOUN) trees tree trunks skrev (’wrote’, VERB) death wrote biography gul (’yellow’, ADJ) greenish bluish colored Table 1: Three nearest neighbors in the English training data of six words occurring in the Spanish test data, in the embeddings used in our experiments. Only 2/6 words were in the German data. skip-gram model and CBOW. The two models both rely on three level architectures with input, output and a middle layer for intermediate target word representations. The major difference is that skip-gram uses the target word as input and the context as output, whereas the CBOW model does it the other way around. Learning goes by back-propagation, and random target words are used as negative examples. Levy and Goldberg (2014) show that prediction-based representations obtained with the skip-gram model can be related to count-based ones obtained with PMI. They argue that which is best, varies across tasks. 2.1.2 Bilingual representations Klementiev et al. 
(2012) learn distinct embedding models for the source and target languages, but while learning to minimize the sum of the two models’ losses, they jointly learn a regularizing interaction matrix, enforcing word pairs aligned in parallel text to have similar representations. Note that Klementiev et al. (2012) rely on word-aligned parallel text, and thereby on a large-coverage soft mapping of source words to target words. Other approaches rely on small coverage dictionaries with hard 1:1 mappings between words. Klementiev et al. (2012) do not use skip-gram or CBOW, but the language model presented in Bengio et al. (2003). Chandar et al. (2014) also rely on sentencealigned parallel text, but do not make use of word alignments. They begin with bag-of-words representations of source and target sentences. They then use an auto-encoder architecture. Autoencoders for document classification typically try to reconstruct bag-of-words input vectors at the output layer, using back-propagation, passing the representation through a smaller middle layer. This layer then provides a dimensionality reduction. Chandar et al. (2014) instead replace the output layer with the target language bag-of-words reconstruction. In their final set-up, they simultaneously minimize the loss of a source-source, a target-target, a source-target, and a target-source auto-encoder, which corresponds to training a single auto-encoder with randomly chosen instances from source-target pairs. The bilingual word vectors can now be read off the auto-encoder’s middle layer. Xiao and Guo (2014) use a CBOW model and random target words as negative examples. The trick they introduce to learn bilingual embeddings, relies on a bilingual dictionary, in their case obtained from Wiktionary. They only use the unambiguous translation pairs for the source and target languages in question and simply force translation equivalents to have the same representation. This corresponds to replacing words from unambigu1715 ous translation pairs with a unique dummy symbol. Gouws and Søgaard (2015) present a much simpler approach to learning prediction-based bilingual representations. They assume a list of sourcetarget pivot word pairs that should obtain similar representations, i.e., translations or words with similar representations in some knowledge base. They then present a generative model for constructing a mixed language corpus by randomly selecting sentences from source and target corpora, and randomly replacing pivot words with their equivalent in the other language. They show that running the CBOW model on such a mixed corpus suffices to learn competitive bilingual embeddings. Like Xiao and Guo (2014), Gouws and Søgaard (2015) only use unambiguous translation pairs. There has, to the best of our knowledge, been no previous work on count-based approaches to bilingual representations. 2.2 Inverted indexing In this paper, we introduce a new count-based approach, INVERTED, to obtaining cross-lingual word representations using inverted indexing, comparing it with bilingual word representations learned using discriminative techniques. The main advantage of this approach, apart for its simplicity, is that it provides truly inter-lingual representations. Our idea is simple. Wikipedia is a cross-lingual, crowd-sourced encyclopedia with more than 35 million articles written in different languages. At the time of writing, Wikipedia contains more than 10,000 articles in 129 languages. 52 languages had more than 100,000 articles. 
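The construction, introduced here and spelled out in the next paragraph, can be sketched as follows: articles linking to the same Wikipedia concept are collapsed into one column of a word-by-concept matrix, overly frequent words are dropped, and SVD reduces each word's row to a fixed number of dimensions. The input aligned_articles is a hypothetical, already tokenized structure, and scikit-learn's TruncatedSVD is substituted for the Gensim LSI implementation the authors report using.

```python
import numpy as np
from scipy.sparse import coo_matrix
from sklearn.decomposition import TruncatedSVD

def inverted_representations(aligned_articles, n_dims=40, max_df=5000):
    """aligned_articles: one entry per Wikipedia concept, each a list of
    tokenized articles in the different languages, e.g. [en_tokens, de_tokens, ...]."""
    vocab, rows, cols = {}, [], []
    for concept_id, articles in enumerate(aligned_articles):
        for term in set(t for article in articles for t in article):
            rows.append(vocab.setdefault(term, len(vocab)))
            cols.append(concept_id)
    # word-by-concept inverted index with binary co-occurrence entries
    m = coo_matrix((np.ones(len(rows), dtype=np.float32), (rows, cols)),
                   shape=(len(vocab), len(aligned_articles))).tocsr()
    document_freq = m.getnnz(axis=1)
    keep = np.flatnonzero(document_freq <= max_df)   # drop overly frequent words
    reduced = TruncatedSVD(n_components=n_dims).fit_transform(m[keep])
    words = sorted(vocab, key=vocab.get)
    return {words[i]: reduced[row] for row, i in enumerate(keep)}
```

In line with the raw co-occurrence data used for INVERTED, the matrix holds plain binary indicators rather than PMI weights; swapping in a PMI weighting would be a straightforward variation, but is not what the authors report.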
Several articles are written on the same topic, but in different languages, and these articles all link to the same node in the Wikipedia ontology, the same Wikipedia concept. If for a set of languages, we identify the common subset of Wikipedia concepts, we can thus describe each concept by the set of terms used in the corresponding articles. Each term set will include terms from each of the different languages. We can now present a word by the corresponding row in the inverted indexing of this concept-to-term set matrix. Instead of representing a Wikipedia concept by the terms used across languages to describe it, we describe a word by the Wikipedia concepts it is used to describe. Note that because of the cross-lingual concepts, this vector representation is by definition cross-lingual. So, for example, if the word glasses is used in the English Wikipedia article on Harry Potter, and the English Wikipedia article on Google, and the word Brille occurs in the corresponding German ones, the two words are likely to get similar representations. In our experiments, we use the common subset of available German, English, French, Spanish, and Swedish Wikipedia dumps.1 We leave out words occurring in more than 5000 documents and perform dimensionality reduction using stochastic, two-pass, rank-reduced SVD - specifically, the latent semantic indexing implementation in Gensim using default parameters.2 2.3 Baseline embeddings We use the word embedding models of Klementiev et al. (2012)3 (KLEMENTIEV), and Chandar et al. (2014) (CHANDAR) as baselines in the experiments below. We also ran some of our experiments with the embeddings provided by Gouws and Søgaard (2015), but results were very similar to Chandar et al. (2014). We compare the nearest cross-language neighbors in the various representations in Table 1. Specifically, we selected five words from the Spanish test data and searched for its three nearest neighbors in KLEMENTIEV, CHANDAR and INVERTED. The nearest neighbors are presented left to right. We note that CHANDAR and INVERTED seem to contain less noise. KLEMENTIEV is the only model that relies on wordalignments. Whether the noise originates from alignments, or just model differences, is unclear to us. 2.4 Parameters of the word representation models For KLEMENTIEV and CHANDAR, we rely on embeddings provided by the authors. The only parameter in inverted indexing is the fixed dimensionality in SVD. Our baseline models use 40 dimensions. In document classification, we also use 40 dimensions, but for POS tagging and dependency parsing, we tune the dimensionality parameter δ ∈{40, 80, 160} on Spanish development data when possible. 
For document clas1https://sites.google.com/site/rmyeid/ projects/polyglot 2http://radimrehurek.com/gensim/ 3http://klementiev.org/data/distrib/ 1716 TRAIN TEST TOKEN COVERAGE lang data points tokens data points tokens KLEMENTIEV CHANDAR INVERTED RCV – DOCUMENT CLASSIFICATION en 10000 – – – 0.314 0.314 0.779 de – – 4998 – 0.132 0.132 0.347 AMAZON – DOCUMENT CLASSIFICATION en 6000 – – – 0.314 0.314 0.779 de – – 6000 – 0.132 0.132 0.347 GOOGLE UNIVERSAL TREEBANKS – POS TAGGING & DEPENDENCY PARSING en 39.8k 950k 2.4k 56.7k – – – de 2.2k 30.4k 1.0k 16.3k 0.886 0.884 0.587 es 3.3k 94k 0.3k 8.3k 0.916 0.916 0.528 fr 3.3k 74.9k 0.3k 6.9k 0.888 0.888 0.540 sv 4.4k 66.6k 1.2k 20.3k n/a n/a 0.679 CONLL 07 – DEPENDENCY PARSING en 18.6 447k – – – – – es – – 206 5.7k 0.841 0.841 0.455 de – – 357 5.7k 0.616 0.612 0.294 sv – – 389 5.7k n/a n/a 0.561 EUROPARL – WORD ALIGNMENT en – – 100 – 0.370 0.370 0.370 es – – 100 – 0.533 0.533 0.533 Table 2: Characteristics of the data sets. Embeddings coverage (token-level) for KLEMENTIEV, CHANDAR and INVERTED on the test sets. We use the common vocabulary on WORD ALIGNMENT. sification and word alignment, we fix the number of dimensions to 40. For both our baselines and systems, we also tune a scaling factor σ ∈{1.0, 0.1, 0.01, 0.001} for POS tagging and dependency parsing, using the scaling method from Turian et al. (2010), also used in Gouws and Søgaard (2015). We do not scale our embeddings for document classification or word alignment. 3 Experiments The data set characteristics are found in Table 2.3. 3.1 Document classification Data Our first document classification task is topic classification on the cross-lingual multi-domain sentiment analysis dataset AMAZON in Prettenhofer and Stein (2010).4 We represent each document by the average of the representations of those words that we find both in the documents and in our embeddings. Rather than classifying reviews by sentiment, we classify by topic, trying to discriminate between book reviews, music reviews and DVD reviews, as a three-way classification problem, training on English and testing on German. Unlike in the other tasks below, we always 4http://www.webis.de/research/corpora/ use unscaled word representations, since these are our only features. All word representations have 40 dimensions. The other document classification task is a fourway classification problem distinguishing between four topics in RCV corpus.5 See Klementiev et al. (2012) for details. We use exactly the same set-up as for AMAZON. Baselines We use the default parameters of the implementation of logistic regression in Sklearn as our baseline.6 The feature representation is the average embedding of non-stopwords in KLEMENTIEV, resp., CHANDAR. Out-of-vocabulary words do not affect the feature representation of the documents. System For our system, we replace the above neural net word embeddings with INVERTED representations. Again, out-of-vocabulary words do not affect the feature representation of the documents. 3.2 POS tagging Data We use the coarse-grained part-of-speech annotations in the Google Universal Treebanks v. 1.0 5http://www.ml4nlp.de/code-and-data 6http://scikit-learn.org/stable/ 1717 (McDonald et al., 2013).7 Out of the languages in this set of treebanks, we focus on five languages (de, en, es, fr, sv), with English only used as training data. 
Those are all treebanks of significant size, but more importantly, we have baseline embeddings for four of these languages, as well as tag dictionaries (Li et al., 2012) needed for the POS tagging experiments. Baselines One baseline method is a typeconstrained structured perceptron with only ortographic features, which are expected to transfer across languages. The type constraints come from Wiktionary, a crowd-sourced tag dictionary.8 Type constraints from Wiktionary were first used by Li et al. (2012), but note that their set-up is unsupervised learning. T¨ackstr¨om et al. (2013) also used type constraints in a supervised set-up. Our learning algorithm is the structured perceptron algorithm originally proposed by Collins (2002). In our POS tagging experiments, we always do 10 passes over the data. We also present two other baselines, where we augment the feature representation with different embeddings for the target word, KLEMENTIEV and CHANDAR. With all the embeddings in POS tagging, we assign a mean vector to out-of-vocabulary words. System For our system, we simply augment the delexicalized POS tagger with the INVERTED distributional representation of the current word. The best parameter setting on Spanish development data was σ = 0.01, δ = 160. 3.3 Dependency parsing Data We use the same treebanks from the Google Universal Treebanks v. 1.0 as used in our POS tagging experiments. We again use the Spanish development data for parameter tuning. For compatibility with Xiao and Guo (2014), we also present results on CoNLL 2006 and 2007 treebanks for languages for which we had baseline and system word representations (de, es, sv). Our parameter settings for these experiments were the same as those tuned on the Spanish development data from the Google Universal Treebanks v. 1.0. Baselines The most obvious baseline in our experiments is delexicalized transfer (DELEX) (McDonald et al., 2011; Søgaard, 2011). This baseline system simply learns models without lexical features. We use a modified version of the first-order Mate 7http://code.google.com/p/uni-dep-tb/ 8https://code.google.com/p/ wikily-supervised-pos-tagger/ parser (Bohnet, 2010) that also takes continuousvalued embeddings as input an disregards features that include lexical items. For our embeddings baselines, we augment the feature space by adding embedding vectors for head h and dependent d. We experimented with different versions of combining embedding vectors, from firing separate h and d per-dimension features (Bansal et al., 2014) to combining their information. We found that combining the embeddings of h and d is effective and consistently use the absolute difference between the embedding vectors, since that worked better than addition and multiplication on development data. Delexicalized transfer (DELEX) uses three (3) iterations over the data in both the single-source and the multi-source set-up, a parameter set on the Spanish development data. The remaining parameters were obtained by averaging over performance with different embeddings on the Spanish development data, obtaining: σ = 0.005, δ = 20, i = 3, and absolute difference for vector combination. With all the embeddings in dependency parsing, we assign a POS-specific mean vector to out-of-vocabulary words, i.e., the mean of vectors for words with the input word’s POS. System We use the same parameters as those used for our baseline systems. In the single-source setup, we use absolute difference for combining vectors, while addition in the multi-source set-up. 
3.4 Word alignment Data We use the manually word-aligned EnglishSpanish Europarl data from Graca et al. (2008). The dataset contains 100 sentences. The annotators annotated whether word alignments were certain or possible, and we present results with all word alignments and with only the certain ones. See Graca et al. (2008) for details. Baselines For word alignment, we simply align every aligned word in the gold data, for which we have a word embedding, to its (Euclidean) nearest neighbor in the target sentence. We evaluate this strategy by its precision (P@1). System We compare INVERTED with KLEMENTIEV and CHANDAR. To ensure a fair comparison, we use the subset of words covered by all three embeddings. 1718 de es fr sv av-sv EN→TARGET EMBEDS K12 80.20 73.16 47.69 67.02 C14 74.85 83.03 48.24 68.71 INVERTED SVD 81.18 82.12 49.68 78.72 70.99 MULTI-SOURCE→TARGET INVERTED SVD 80.10 84.69 49.68 78.72 70.66 Table 4: POS tagging (accuracies), K12: KLEMENTIEV, C14: CHANDAR. Parameters tuned on development data: σ = 0.01, δ = 160. Iterations not tuned (i = 10). Averages do not include Swedish, for comparability. Dataset KLEMENTIEV CHANDAR INVERTED AMAZON 0.32 0.36 0.49 RCV 0.75 0.90 0.55 Table 3: Document classification results (F1scores) UAS de es sv EN→TARGET DELEX 44.78 47.07 56.75 DELEX-XIAO 46.24 52.05 57.79 EMBEDS K12 44.77 47.31 C14 44.32 47.56 INVERTED 45.01 47.45 56.15 XIAO 49.54 55.72 61.88 Table 6: Dependency parsing for CoNLL 2006/2007 datasets. Parameters same as on the Google Universal Treebanks. 4 Results 4.1 Document classification Our document classification results in Table 3 are mixed, but we note that both Klementiev et al. (2012) and Chandar et al. (2014) developed their methods using development data from the RCV corpus. It is therefore not surprising that they obtain good results on this data. On AMAZON, INVERTED is superior to both KLEMENTIEV and CHANDAR. 4.2 POS tagging In POS tagging, INVERTED leads to significant improvements over using KLEMENTIEV and CHANDAR. See Table 4 for results. Somewhat surprisingly, we see no general gain from using multiple source languages. This is very different from what has been observed in dependency parsing (McDonald et al., 2011), but may be explained by treebank sizes, language similarity, or the noise introduced by the word representations. 4.3 Dependency parsing In dependency parsing, distributional word representations do not lead to significant improvements, but while KLEMENTIEV and CHANDAR hurt performance, the INVERTED representations lead to small improvements on some languages. The fact that improvements are primarily seen on Spanish suggest that our approach is parametersensitive. This is in line with previous observations that count-based methods are more parameter-sensitive than prediction-based ones (Baroni et al., 2014). For comparability with Xiao and Guo (2014), we also did experiments with the CoNLL 2006 and CoNLL 2007 datasets for which we had embeddings (Table 6). Again, we see little effects from using the word representations, and we also see that our baseline model is weaker than the one in Xiao and Guo (2014) (DELEX-XIAO). See §5 for further discussion. 4.4 Word alignment The word alignment results are presented in Table 7. On the certain alignments, we see an accuracy of more than 50% with INVERTED in one case. KLEMENTIEV and CHANDAR have the advantage of having been trained on the EnglishSpanish Europarl data, but nevertheless we see consistent improvements with INVERTED over their off-the-shelf embeddings. 
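The alignment baseline of Section 3.4 is simple enough to state in a few lines. Below is a minimal sketch, assuming both languages' word vectors live in the same cross-lingual space; the data structures (`embed`, `gold_links`) are illustrative stand-ins rather than the authors' evaluation code.

import numpy as np

def align_sentence(src_tokens, trg_tokens, embed):
    """Align every source token that has an embedding to its (Euclidean)
    nearest neighbour among the embedded target tokens."""
    alignments = {}
    trg = [(j, np.asarray(embed[t], dtype=float))
           for j, t in enumerate(trg_tokens) if t in embed]
    for i, s in enumerate(src_tokens):
        if s not in embed or not trg:
            continue  # skip tokens without a representation
        s_vec = np.asarray(embed[s], dtype=float)
        dists = [(np.linalg.norm(s_vec - v), j) for j, v in trg]
        alignments[i] = min(dists)[1]
    return alignments

def precision_at_1(predicted, gold_links):
    """P@1: fraction of predicted links that appear among the gold links."""
    hits = sum(1 for i, j in predicted.items() if (i, j) in gold_links)
    return hits / max(len(predicted), 1)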
1719 UAS LAS de es fr sv de es fr sv EN→TARGET DELEX 56.26 62.11 64.30 66.61 48.24 53.01 54.98 56.93 EMBEDS K12 56.47 61.92 61.51 48.26 52.88 51.76 C14 56.19 61.97 62.95 48.11 52.97 53.90 INVERTED 56.18 61.71 63.81 66.54 48.82 53.04 54.81 57.18 MULTI-SOURCE→TARGET DELEX 56.80 63.21 66.00 67.49 49.32 54.77 56.53 57.86 INVERTED 56.56 64.03 66.22 67.32 48.82 55.03 56.79 57.70 Table 5: Dependency parsing results on the Universal Treebanks (unlabeled and labeled attachment scores). Parameters tuned on development data: σ = 0.005, δ = 20, i = 3. KLEMENTIEV CHANDAR INVERTED EN-ES (S+P) 0.20 0.24 0.25 ES-EN (S+P) 0.35 0.32 0.41 EN-ES (S) 0.20 0.25 0.25 ES-EN (S) 0.38 0.39 0.53 Table 7: Word alignment results (P@1). S=sure (certain) alignments. P=possible alignments. 5 Related Work As noted in §1, there has been some work on learning word representations for cross-lingual parsing lately. T¨ackstr¨om et al. (2013) presented a bilingual clustering algorithm and used the word clusters to augment a delexicalized transfer baseline. Bansal et al. (2014), in the context of monolingual dependency parsing, investigate continuous word representation for dependency parsing in a monolingual cross-domain setup and compare them to word clusters. However, to make the embeddings work, they had to i) bucket real values and perform hierarchical clustering on them, ending up with word clusters very similar to those of T¨ackstr¨om et al. (2013); ii) use syntactic context to estimate embeddings. In the cross-lingual setting, syntactic context is not available for the target language, but doing clustering on top of inverted indexing is an interesting option we did not explore in this paper. Xiao and Guo (2014) is, to the best of our knowledge, the only parser using bilingual embeddings for unsupervised cross-lingual parsing. They evaluate their models on CoNLL 2006 and CoNLL 2007, and we compare our results to theirs in §4. They obtain much better relative improvements on dependency parsing that we do - comparable to those we observe in document classification and POS tagging. It is not clear to us what is the explanation for this improvement. The approach relies on a bilingual dictionary as in Klementiev et al. (2012) and Gouws and Søgaard (2015), but none of these embeddings led to improvements. Unfortunately, we did not have the code or embeddings of Xiao and Guo (2014). One possible explanation is that they use the embeddings in a very different way in the parser. They use the MSTParser. Unfortunately, they do not say exactly how they combine the embeddings with their baseline feature model. The idea of using inverted indexing in Wikipedia for modelling language is not entirely new either. In cross-lingual information retrieval, this technique, sometimes referred to as explicit semantic analysis, has been used to measure source and target language document relatedness (Potthast et al., 2008; Sorg and Cimiano, 2008). Gabrilovich and Markovitch (2009) also use this technique to model documents, and they evaluate their method on text categorization and on computing the degree of semantic relatedness between text fragments. See also M¨uller and Gurevych (2009) for an application of explicit semantic analysis to modelling documents. This line of work is very different from ours, and to the best of our knowledge, we are the first to propose to use inverted indexing of Wikipedia for cross-lingual word representations. 
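The INVERTED representations compared throughout this section are built by inverted indexing of Wikipedia; their construction is described earlier in the paper, but the general idea is that each word is represented by the cross-lingually aligned Wikipedia articles it occurs in. The following is only a rough sketch of that idea under stated assumptions (raw occurrence counts, a shared concept-ID space, an optional truncated-SVD step analogous to the INVERTED SVD runs in Table 4); the exact weighting and filtering choices here are assumptions, not the paper's recipe.

import numpy as np
from collections import defaultdict
from sklearn.decomposition import TruncatedSVD

def inverted_index_vectors(articles, n_concepts, svd_dim=40):
    """`articles` iterates over (concept_id, tokens) pairs, where concept_id
    indexes a Wikipedia article aligned across languages. Each word is
    represented by the concepts (articles) it occurs in."""
    counts = defaultdict(lambda: np.zeros(n_concepts))
    for concept_id, tokens in articles:
        for tok in set(tokens):
            counts[tok][concept_id] += 1.0
    vocab = sorted(counts)
    # Dense matrix for exposition only; a sparse matrix would be used in practice.
    matrix = np.vstack([counts[w] for w in vocab])
    reduced = TruncatedSVD(n_components=svd_dim).fit_transform(matrix)
    return dict(zip(vocab, reduced))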
1720 6 Conclusions We presented a simple, scalable approach to obtaining cross-lingual word representations that enables multi-source learning. We compared these representations to two state-of-the-art approaches to neural net word embeddings across four tasks and 17 datasets, obtaining better results than both approaches in 14/17 of these cases. References Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In ACL. Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In ACL. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155. Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In COLING. Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467–479. Sarath Chandar, Stanislas Lauly, Hugo Larochelle, Mitesh Khapra, Balaraman Ravindran, Vikas C Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In NIPS. Michael Collins. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In EMNLP. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In ICML. Dipanjan Das and Slav Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In ACL. Victoria Fossum and Steven Abney. 2005. Automatically inducing a part-of-speech tagger by projecting from multiple source languages across aligned corpora. In IJCNLP. Evgeniy Gabrilovich and Shaul Markovitch. 2009. Wikipedia-based semantic interpretation for natural language processing. Journal of Artificial Intelligence Research, pages 443–498. Stephan Gouws and Anders Søgaard. 2015. Simple task-specific bilingual word embeddings. In NAACL. Joao Graca, Joana Pardal, Lu´ısa Coheur, and Diamantino Caseiro. 2008. Building a golden collection of parallel multi-language word alignments. In LREC. Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11(3):311–325. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In COLING. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In ACL. Shen Li, Jo˜ao Grac¸a, and Ben Taskar. 2012. Wiki-ly supervised part-of-speech tagging. In EMNLP. Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In EMNLP. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T¨ackstr¨om, Claudia Bedini, N´uria Bertomeu Castell´o, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In ACL. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory Corrado, and Jeffrey Dean. 2013. Distributed representations of words and phrases and their compositionality. In NIPS. Christof M¨uller and Iryna Gurevych. 2009. A study on the semantic relatedness of query and document terms in information retrieval. In EMNLP. 
Martin Potthast, Benno Stein, and Maik Anderka. 2008. A wikipedia-based multilingual retrieval model. In Advances in Information Retrieval. Peter Prettenhofer and Benno Stein. 2010. Crosslanguage text classification using structural correspondence learning. In ACL. Anders Søgaard. 2011. Data point selection for crosslanguage adaptation of dependency parsers. In Proceedings of ACL. Philipp Sorg and Philipp Cimiano. 2008. Crosslingual information retrieval with explicit semantic analysis. In Working Notes for the CLEF 2008 Workshop. Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. TACL, 1:1–12. 1721 Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In ACL. Min Xiao and Yuhong Guo. 2014. Distributed word representation learning for cross-lingual dependency parsing. In CoNLL. Daniel Zeman and Philip Resnik. 2008. Crosslanguage parser adaptation between related languages. In IJCNLP. 1722
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1723–1732, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Multi-Task Learning for Multiple Language Translation Daxiang Dong, Hua Wu, Wei He, Dianhai Yu and Haifeng Wang Baidu Inc, Beijing, China {dongdaxiang, wu hua, hewei06, yudianhai, wanghaifeng}@baidu.com Abstract In this paper, we investigate the problem of learning a machine translation model that can simultaneously translate sentences from one source language to multiple target languages. Our solution is inspired by the recently proposed neural machine translation model which generalizes machine translation as a sequence learning problem. We extend the neural machine translation to a multi-task learning framework which shares source language representation and separates the modeling of different target language translation. Our framework can be applied to situations where either large amounts of parallel data or limited parallel data is available. Experiments show that our multi-task learning model is able to achieve significantly higher translation quality over individually learned model in both situations on the data sets publicly available. 1 Introduction Translation from one source language to multiple target languages at the same time is a difficult task for humans. A person often needs to be familiar with specific translation rules for different language pairs. Machine translation systems suffer from the same problems too. Under the current classic statistical machine translation framework, it is hard to share information across different phrase tables among different language pairs. Translation quality decreases rapidly when the size of training corpus for some minority language pairs becomes smaller. To conquer the problems described above, we propose a multi-task learning framework based on a sequence learning model to conduct machine translation from one source language to multiple target languages, inspired by the recently proposed neural machine translation(NMT) framework proposed by Bahdanau et al. (2014). Specifically, we extend the recurrent neural network based encoder-decoder framework to a multi-task learning model that shares an encoder across all language pairs and utilize a different decoder for each target language. The neural machine translation approach has recently achieved promising results in improving translation quality. Different from conventional statistical machine translation approaches, neural machine translation approaches aim at learning a radically end-to-end neural network model to optimize translation performance by generalizing machine translation as a sequence learning problem. Based on the neural translation framework, the lexical sparsity problem and the long-range dependency problem in traditional statistical machine translation can be alleviated through neural networks such as long shortterm memory networks which provide great lexical generalization and long-term sequence memorization abilities. The basic assumption of our proposed framework is that many languages differ lexically but are closely related on the semantic and/or the syntactic levels. We explore such correlation across different target languages and realize it under a multi-task learning framework. 
We treat a separate translation direction as a sub RNN encode-decoder task in this framework which shares the same encoder (i.e. the same source language representation) across different translation directions, and use a different decoder for each specific target language. In this way, this proposed multi-task learning model can make full use of the source language corpora across different language pairs. Since the encoder part shares the same source language representation 1723 across all the translation tasks, it may learn semantic and structured predictive representations that can not be learned with only a small amount of data. Moreover, during training we jointly model the alignment and the translation process simultaneously for different language pairs under the same framework. For example, when we simultaneously translate from English into Korean and Japanese, we can jointly learn latent similar semantic and structure information across Korea and Japanese because these two languages share some common language structures. The contribution of this work is three folds. First, we propose a unified machine learning framework to explore the problem of translating one source language into multiple target languages. To the best of our knowledge, this problem has not been studied carefully in the statistical machine translation field before. Second, given large-scale training corpora for different language pairs, we show that our framework can improve translation quality on each target language as compared with the neural translation model trained on a single language pair. Finally, our framework is able to alleviate the data scarcity problem, using language pairs with large-scale parallel training corpora to improve the translation quality of those with few parallel training corpus. The following sections will be organized as follows: in section 2, related work will be described, and in section 3, we will describe our multi-task learning method. Experiments that demonstrate the effectiveness of our framework will be described in section 4. Lastly, we will conclude our work in section 5. 2 Related Work Statistical machine translation systems often rely on large-scale parallel and monolingual training corpora to generate translations of high quality. Unfortunately, statistical machine translation system often suffers from data sparsity problem due to the fact that phrase tables are extracted from the limited bilingual corpus. Much work has been done to address the data sparsity problem such as the pivot language approach (Wu and Wang, 2007; Cohn and Lapata, 2007) and deep learning techniques (Devlin et al., 2014; Gao et al., 2014; Sundermeyer et al., 2014; Liu et al., 2014). On the problem of how to translate one source language to many target languages within one model, few work has been done in statistical machine translation. A related work in SMT is the pivot language approach for statistical machine translation which uses a commonly used language as a ”bridge” to generate source-target translation for language pair with few training corpus. Pivot based statistical machine translation is crucial in machine translation for resource-poor language pairs, such as Spanish to Chinese. Considering the problem of translating one source language to many target languages, pivot based SMT approaches does work well given a large-scale source language to pivot language bilingual corpus and large-scale pivot language to target languages corpus. 
However, in reality, language pairs between English and many other target languages may not be large enough, and pivot-based SMT sometimes fails to handle this problem. Our approach handles one to many target language translation in a different way that we directly learn an end to multi-end translation system that does not need a pivot language based on the idea of neural machine translation. Neural Machine translation is a emerging new field in machine translation, proposed by several work recently (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Bahdanau et al., 2014), aiming at end-to-end machine translation without phrase table extraction and language model training. Different from traditional statistical machine translation, neural machine translation encodes a variable-length source sentence with a recurrent neural network into a fixed-length vector representation and decodes it with another recurrent neural network from a fixed-length vector into variable-length target sentence. A typical model is the RNN encoder-decoder approach proposed by Bahdanau et al. (2014), which utilizes a bidirectional recurrent neural network to compress the source sentence information and fits the conditional probability of words in target languages with a recurrent manner. Moreover, soft alignment parameters are considered in this model. As a specific example model in this paper, we adopt a RNN encoder-decoder neural machine translation model for multi-task learning, though all neural network based model can be adapted in our framework. In the natural language processing field, a 1724 notable work related with multi-task learning was proposed by Collobert et al. (2011) which shared common representation for input words and solve different traditional NLP tasks such as part-of-Speech tagging, name entity recognition and semantic role labeling within one framework, where the convolutional neural network model was used. Hatori et al. (2012) proposed to jointly train word segmentation, POS tagging and dependency parsing, which can also be seen as a multi-task learning approach. Similar idea has also been proposed by Li et al. (2014) in Chinese dependency parsing. Most of multi-task learning or joint training frameworks can be summarized as parameter sharing approaches proposed by Ando and Zhang (2005) where they jointly trained models and shared center parameters in NLP tasks. Researchers have also explored similar approaches (Sennrich et al., 2013; Cui et al., 2013) in statistical machine translation which are often refered as domain adaption. Our work explores the possibility of machine translation under the multitask framework by using the recurrent neural networks. To the best of our knowledge, this is the first trial of end to end machine translation under multi-task learning framework. 3 Multi-task Model for Multiple Language Translation Our model is a general framework for translating from one source language to many targets. The model we build in this section is a recurrent neural network based encoder-decoder model with multiple target tasks, and each task is a specific translation direction. Different tasks share the same translation encoder across different language pairs. We will describe model details in this section. 
3.1 Objective Function

Given a training sentence pair {x, y}, a standard recurrent neural network based encoder-decoder translation model fits a parameterized model to maximize the conditional probability of a target sentence y given a source sentence x, i.e., argmax p(y|x). We extend this to the multilingual setting. In particular, suppose we want to translate from English into several different languages, for instance French (Fr), Dutch (Nl), and Spanish (Es). Parallel training data are collected before training, i.e., En-Fr, En-Nl, and En-Es parallel sentences. Since the English representation of the three language pairs is shared in one encoder, the objective function we optimize is the sum of several conditional log-probability terms, each conditioned on the representation generated by the same encoder:

L(\Theta) = \arg\max_{\Theta} \sum_{T_p} \frac{1}{N_p} \sum_{i=1}^{N_p} \log p\big(y_i^{T_p} \mid x_i^{T_p}; \Theta\big)    (1)

where \Theta = \{\Theta_{src}, \Theta_{trg}^{T_p}\}, T_p = 1, 2, ..., T_m; \Theta_{src} is the collection of parameters of the source encoder, and \Theta_{trg}^{T_p} is the parameter set of the T_p-th target language. N_p is the size of the parallel training corpus of the T_p-th language pair. The decoder parameters are kept separate for different target languages, so we have T_m decoders to optimize. This parameter-sharing strategy lets different language pairs maintain the same semantic and structural information of the source language while learning to translate into the target languages with different decoders.

3.2 Model Details

Suppose we have several language pairs (x^{T_p}, y^{T_p}), where T_p denotes the index of the T_p-th language pair. For a specific language pair, given a source sentence (x_1^{T_p}, x_2^{T_p}, ..., x_n^{T_p}), the goal is to jointly maximize the conditional probability of each generated target word. The probability of generating the t-th target word is estimated as

p\big(y_t^{T_p} \mid y_1^{T_p}, ..., y_{t-1}^{T_p}, x^{T_p}\big) = g\big(y_{t-1}^{T_p}, s_t^{T_p}, c_t^{T_p}\big)    (2)

where the function g is parameterized by a feed-forward neural network with a softmax output layer and can be viewed as a probability predictor. s_t^{T_p} is the recurrent hidden state at time t, estimated as

s_t^{T_p} = f\big(s_{t-1}^{T_p}, y_{t-1}^{T_p}, c_t^{T_p}\big)    (3)

The context vector c_t^{T_p} depends on a sequence of annotations (h_1, ..., h_{L_x}) to which the encoder maps the input sentence, where L_x is the number of tokens in x. Each annotation h_i is a bidirectional recurrent representation of the forward and backward sequence information around the i-th word:

c_t^{T_p} = \sum_{j=1}^{L_x} a_{tj}^{T_p} h_j    (4)

where the weight a_{tj}^{T_p} is a scalar computed by

a_{tj}^{T_p} = \frac{\exp\big(e_{tj}^{T_p}\big)}{\sum_{k=1}^{L_x^{T_p}} \exp\big(e_{tk}^{T_p}\big)}    (5)

e_{tj}^{T_p} = \phi\big(s_{t-1}^{T_p}, h_j\big)    (6)

a_{tj}^{T_p} is a normalized score of e_{tj}, a soft alignment model measuring how well the input context around the j-th source word matches the output word in the t-th position. e_{tj} is modeled by a perceptron-like function:

\phi(x, y) = v^{T} \tanh(Wx + Uy)    (7)

To compute h_j, a bidirectional recurrent neural network is used: the representations of the forward and backward sequences of the input sentence are estimated and concatenated into a single vector. This concatenated vector can be used to translate into multiple languages at test time.

h_j = [\overrightarrow{h_j}; \overleftarrow{h_j}]^{T}    (8)

From a probabilistic perspective, our model learns the conditional distributions of several target languages given the same source corpus. Thus, the recurrent encoder-decoders are jointly trained with the sum of the corresponding conditional probabilities.
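To make (4)-(7) concrete, the following is a minimal numpy sketch of a single soft-alignment step. The array names are illustrative, the bias terms and the decoder state update f of (3) and output layer g of (2) are omitted, and the superscript T_p is dropped since the computation is identical for every decoder.

import numpy as np

def attention_context(s_prev, H, W, U, v):
    """One soft-alignment step, following (4)-(7).
    s_prev: previous decoder state s_{t-1}, shape (d,)
    H:      encoder annotations h_1..h_Lx, shape (Lx, 2d), i.e. the
            concatenated forward/backward states of (8)
    W, U, v: parameters of the perceptron-like scorer phi in (7)."""
    # e_tj = v^T tanh(W s_{t-1} + U h_j)                       -- (6), (7)
    scores = np.tanh(s_prev @ W.T + H @ U.T) @ v
    # a_tj: softmax over source positions                      -- (5)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    # c_t = sum_j a_tj h_j                                     -- (4)
    return weights @ H, weights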
As for the bidirectional recurrent neural network module, we adopt the recently proposed gated recurrent neural network (Cho et al., 2014). The gated recurrent neural network is shown to have promising results in several sequence learning problem such as speech recognition and machine translation where input and output sequences are of variable length. It is also shown that the gated recurrent neural network has the ability to address the gradient vanishing problem compared with the traditional recurrent neural network, and thus the long-range dependency problem in machine translation can be handled well. In our multi-task learning framework, the parameters of the gated recurrent neural network in the encoder are shared, which is formulated as follows. ht = (I −zt) ⊙ht−1 + zt ⊙ˆht (9) zt = σ(Wzxt + Uzht−1) (10) ˆht = tanh(Wxt + U(rt ⊙ht−1)) (11) rt = σ(Wrxt + Urht−1) (12) Where I is identity vector and ⊙denotes element wise product between vectors. tanh(x) and σ(x) are nonlinear transformation functions that can be applied element-wise on vectors. The recurrent computation procedure is illustrated in 1, where xt denotes one-hot vector for the tth word in a sequence. Figure 1: Gated recurrent neural network computation, where rt is a reset gate responsible for memory unit elimination, and zt can be viewed as a soft weight between current state information and history information. tanh(x) = ex −e−x ex + e−x (13) σ(x) = 1 1 + e−x (14) The overall model is illustrated in Figure 2 where the multi-task learning framework with four target languages is demonstrated. The soft alignment parameters Ai for each encoderdecoder are different and only the bidirectional recurrent neural network representation is shared. 3.3 Optimization The optimization approach we use is the mini-batch stochastic gradient descent approach (Bottou, 1991). The only difference between our optimization and the commonly used stochastic gradient descent is that we learn several minibatches within a fixed language pair for several mini-batch iterations and then move onto the next language pair. Our optimization procedure is shown in Figure 3. 1726 Figure 2: Multi-task learning framework for multiple-target language translation Figure 3: Optimization for end to multi-end model 3.4 Translation with Beam Search Although parallel corpora are available for the encoder and the decoder modeling in the training phrase, the ground truth is not available during test time. During test time, translation is produced by finding the most likely sequence via beam search. ˆY = argmax Y p(YTp|STp) (15) Given the target direction we want to translate to, beam search is performed with the shared encoder and a specific target decoder where search space belongs to the decoder Tp. We adopt beam search algorithm similar as it is used in SMT system (Koehn, 2004) except that we only utilize scores produced by each decoder as features. The size of beam is 10 in our experiments for speedup consideration. Beam search is ended until the endof-sentence eos symbol is generated. 4 Experiments We conducted two groups of experiments to show the effectiveness of our framework. The goal of the first experiment is to show that multi-task learning helps to improve translation performance given enough training corpora for all language pairs. In the second experiment, we show that for some resource-poor language pairs with a few parallel training data, their translation performance could be improved as well. 
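Before turning to the experimental details, here is a minimal numpy rendering of the gated recurrent unit in equations (9)-(12), the recurrent cell of the shared encoder. Weight names mirror the equations; biases are omitted for brevity, and this is a sketch rather than the Theano implementation used in the experiments.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x_t, h_prev, Wz, Uz, Wr, Ur, W, U):
    """One step of the gated recurrent unit of (9)-(12).
    x_t is the embedding of the current input token, h_prev the previous
    hidden state; the W* matrices map inputs, the U* matrices map states."""
    z = sigmoid(Wz @ x_t + Uz @ h_prev)             # update gate, (10)
    r = sigmoid(Wr @ x_t + Ur @ h_prev)             # reset gate, (12)
    h_tilde = np.tanh(W @ x_t + U @ (r * h_prev))   # candidate state, (11)
    return (1.0 - z) * h_prev + z * h_tilde         # new state, (9)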
4.1 Dataset The Europarl corpus is a multi-lingual corpus including 21 European languages. Here we only choose four language pairs for our experiments. The source language is English for all language pairs. And the target languages are Spanish (Es), French (Fr), Portuguese (Pt) and Dutch (Nl). To demonstrate the validity of our learning framework, we do some preprocessing on the training set. For the source language, we use 30k of the most frequent words for source language vocabulary which is shared across different language pairs and 30k most frequent words for each target language. Outof-vocabulary words are denoted as unknown words, and we maintain different unknown word labels for different languages. For test sets, we also restrict all words in the test set to be from our training vocabulary and mark the OOV words as the corresponding labels as in the training data. The size of training corpus in experiment 1 and 2 is listed in Table 1 where 1727 Training Data Information Lang En-Es En-Fr En-Nl En-Pt En-Nl-sub En-Pt-sub Sent size 1,965,734 2,007,723 1,997,775 1,960,407 300,000 300,000 Src tokens 49,158,635 50,263,003 49,533,217 49,283,373 8,362,323 8,260,690 Trg tokens 51,622,215 52,525,000 50,661,711 54,996,139 8,590,245 8,334,454 Table 1: Size of training corpus for different language pairs En-Nl-sub and En-Pt-sub are sub-sampled data set of the full corpus. The full parallel training corpus is available from the EuroParl corpus, downloaded from EuroParl public websites1. We mimic the situation that there are only a smallscale parallel corpus available for some language pairs by randomly sub-sampling the training data. The parallel corpus of English-Portuguese and English-Dutch are sub-sampled to approximately 15% of the full corpus size. We select two data Language pair En-Es En-Fr En-Nl En-Pt Common test 1755 1755 1755 1755 WMT2013 3000 3000 Table 2: Size of test set in EuroParl Common testset and WMT2013 sets as our test data. One is the EuroParl Common test set2 in European Parliament Corpus, the other is WMT 2013 data set3. For WMT 2013, only En-Fr, En-Es are available and we evaluate the translation performance only on these two test sets. Information of test sets is shown in Table 2. 4.2 Training Details Our model is trained on Graphic Processing Unit K40. Our implementation is based on the open source deep learning package Theano (Bastien et al., 2012) so that we do not need to take care about gradient computations. During training, we randomly shuffle our parallel training corpus for each language pair at each epoch of our learning process. The optimization algorithm and model hyper parameters are listed below. • Initialization of all parameters are from uniform distribution between -0.01 and 0.01. • We use stochastic gradient descent with recently proposed learning rate decay strategy Ada-Delta (Zeiler, 2012). 1http:www.statmt.orgeuroparl 2http://www.statmt.org/wmt14/test.tgz 3http://matrix.statmt.org/test sets • Mini batch size in our model is set to 50 so that the convergence speed is fast. • We train 1000 mini batches of data in one language pair before we switch to the next language pair. • For word representation dimensionality, we use 1000 for both source language and target language. • The size of hidden layer is set to 1000. We trained our multi-task model with a multiGPU implementation due to the limitation of Graphic memory. And each target decoder is trained within one GPU card, and we synchronize our source encoder every 1000 batches among all GPU card. 
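The alternating optimization described above (and in Section 3.3) amounts to a simple round-robin schedule over language pairs. A minimal sketch follows; `train_on_batch` and the per-pair mini-batch iterators are hypothetical stand-ins for the compiled Theano training function and the shuffled data streams.

def round_robin_training(pair_iterators, train_on_batch, epochs=10,
                         batches_per_pair=1000):
    """Cycle over language pairs, running `batches_per_pair` mini-batch
    updates on one pair before switching to the next, as in Figure 3."""
    for _ in range(epochs):
        for pair, batches in pair_iterators.items():
            for _ in range(batches_per_pair):
                src, trg = next(batches)          # one mini-batch (size 50)
                train_on_batch(pair, src, trg)    # updates the shared encoder
                                                  # and the pair-specific decoder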
Our model costs about 72 hours on full large parallel corpora training until convergence and about 24 hours on partial parallel corpora training. During decoding, our implementation on GPU costs about 0.5 second per sentence. 4.3 Evaluation We evaluate the effectiveness of our method with EuroParl Common testset and WMT 2013 dataset. BLEU-4 (Papineni et al., 2002) is used as the evaluation metric. We evaluate BLEU scores on EuroParl Common test set with multi-task NMT models and single NMT models to demonstrate the validity of our multi-task learning framework. On the WMT 2013 data sets, we compare performance of separately trained NMT models, multi-task NMT models and Moses. We use the EuroParl Common test set as a development set in both neural machine translation experiments and Moses experiments. For single NMT models and multi-task NMT models, we select the best model with the highest BLEU score in the EuroParl Common testset and apply it to the WMT 2013 dataset. Note that our experiment settings in NMT is equivalent with Moses, considering the same training corpus, development sets and test sets. 1728 4.4 Experimental Results We report our results of three experiments to show the validity of our methods. In the first experiment, we train multi-task learning model jointly on all four parallel corpora and compare BLEU scores with models trained separately on each parallel corpora. In the second experiment, we utilize the same training procedures as Experiment 1, except that we mimic the situation where some parallel corpora are resource-poor and maintain only 15% data on two parallel training corpora. In experiment 3, we test our learned model from experiment 1 and experiment 2 on WMT 2013 dataset. Table 3 and 4 show the case-insensitive BLEU scores on the Europarl common test data. Models learned from the multitask learning framework significantly outperform the models trained separately. Table 4 shows that given only 15% of parallel training corpus of English-Dutch and English-Portuguese, it is possible to improve translation performance on all the target languages as well. This result makes sense because the correlated languages benefit from each other by sharing the same predictive structure, e.g. French, Spanish and Portuguese, all of which are from Latin. We also notice that even though Dutch is from Germanic languages, it is also possible to increase translation performance under our multi-task learning framework which demonstrates the generalization of our model to multiple target languages. Lang-Pair En-Es En-Fr En-Nl En-Pt Single NMT 26.65 21.22 28.75 20.27 Multi Task 28.03 22.47 29.88 20.75 Delta +1.38 +1.25 +1.13 +0.48 Table 3: Multi-task neural translation v.s. single model given large-scale corpus in all language pairs We tested our selected model on the WMT 2013 dataset. Our results are shown in Table 5 where Multi-Full is the model with Experiment 1 setting and the model of Multi-Partial uses the same setting in Experiment 2. The English-French and English-Spanish translation performances are improved significantly compared with models trained separately on each language pair. Note Lang-Pair En-Es En-Fr En-Nl* En-Pt* Single NMT 26.65 21.22 26.59 18.26 Multi Task 28.29 21.89 27.85 19.32 Delta +1.64 +0.67 +1.26 +1.06 Table 4: Multi-task neural translation v.s. single model with a small-scale training corpus on some language pairs. * means that the language pair is sub-sampled. 
that this result is not comparable with the result reported in (Bahdanau et al., 2014) as we use much less training corpus. We also compare our trained models with Moses. On the WMT 2013 data set, we utilize parallel corpora for Moses training without any extra resource such as largescale monolingual corpus. From Table 5, it is shown that neural machine translation models have comparable BLEU scores with Moses. On the WMT 2013 test set, multi-task learning model outperforms both single model and Moses results significantly. 4.5 Model Analysis and Discussion We try to make empirical analysis through learning curves and qualitative results to explain why multi-task learning framework works well in multiple-target machine translation problem. From the learning process, we observed that the speed of model convergence under multi-task learning is faster than models trained separately especially when a model is trained for resourcepoor language pairs. The detailed learning curves are shown in Figure 4. Here we study the learning curve for resource-poor language pairs, i.e. English-Dutch and En-Portuguese, for which only 15% of the bilingual data is sampled for training. The BLEU scores are evaluated on the Europarl common test set. From Figure 4, it can be seen that in the early stage of training, given the same amount of training data for each language pair, the translation performance of the multi-task learning model is improved more rapidly. And the multi-task models achieve better translation quality than separately trained models within three iterations of training. The reason of faster and better convergence in performance is that the encoder parameters are shared across different language pairs, which can make full use of all the source language training data across the language pairs and improve the source language 1729 Nmt Baseline Nmt Multi-Full Nmt Multi-Partial Moses En-Fr 23.89 26.02(+2.13) 25.01(+1.12) 23.83 En-Es 23.28 25.31(+2.03) 25.83(+2.55) 23.58 Table 5: Multi-task NMT v.s. single model v.s. moses on the WMT 2013 test set Figure 4: Faster and Better convergence in Multi-task Learning in multiple language translation representation. The sharing of encoder parameters is useful especially for the resource-poor language pairs. In the multi-task learning framework, the amount of the source language is not limited by the resource-poor language pairs and we are able to learn better representation for the source language. Thus the representation of the source language learned from the multi-task model is more stable, and can be viewed as a constraint that leverages translation performance of all language pairs. Therefore, the overfitting problem and the data scarcity problem can be alleviated for language pairs with only a few training data. In Table 6, we list the three nearest neighbors of some source words whose similarity is computed by using the cosine score of the embeddings both in the multi-task learning framework (from Experiment two ) and in the single model (the resourcepoor English-Portuguese model). Although the nearest neighbors of the high-frequent words such as numbers can be learned both in the multi-task model and the single model, the overall quality of the nearest neighbors learned by the resource-poor single model is much poorer compared with the multi-task model. The multi-task learning framework also generates translations of higher quality. Some examples are shown in Table 7. 
The examples are from the MultiTask Nearest neighbors provide deliver 0.78, providing 0.74, give 0.72 crime terrorism 0.66, criminal 0.65, homelessness 0.65 regress condense 0.74, mutate 0.71, evolve 0.70 six eight 0.98,seven 0.96, 12 0.94 Single-Resource-Poor Nearest Neighbors provide though 0.67,extending 0.56, parliamentarians 0.44 crime care 0.75, remember 0.56, three 0.53 regress committing 0.33, accuracy 0.30, longed-for 0.28 six eight 0.87, three 0.69, thirteen 0.65 Table 6: Source language nearest-neighbor comparison between the multi-task model and the single model WMT 2013 test set. The French and Spanish translations generated by the multi-task learning model and the single model are shown in the table. 5 Conclusion In this paper, we investigate the problem of how to translate one source language into several different target languages within a unified translation model. Our proposed solution is based on the 1730 English Students, meanwhile, say the course is one of the most interesting around. Reference-Fr Les ´etudiants, pour leur part, assurent que le cours est l’ un des plus int´eressants. Single-Fr Les ´etudiants, entre-temps, disent entendu l’ une des plus int´eressantes. Multi-Fr Les ´etudiants, en attendant, disent qu’ il est l’ un des sujets les plus int´eressants. English In addition, they limited the right of individuals and groups to provide assistance to voters wishing to register. Reference-Fr De plus, ils ont limit´e le droit de personnes et de groupes de fournir une assistance aux ´electeurs d´esirant s’ inscrire. Single-Fr En outre, ils limitent le droit des particuliers et des groupes pour fournir l’ assistance aux ´electeurs. Multi-Fr De plus, ils restreignent le droit des individus et des groupes `a fournir une assistance aux ´electeurs qui souhaitent enregistrer. Table 7: Translation of different target languages given the same input in our multi-task model. recently proposed recurrent neural network based encoder-decoder framework. We train a unified neural machine translation model under the multitask learning framework where the encoder is shared across different language pairs and each target language has a separate decoder. To the best of our knowledge, the problem of learning to translate from one source to multiple targets has seldom been studied. Experiments show that given large-scale parallel training data, the multitask neural machine translation model is able to learn good predictive structures in translating multiple targets. Significant improvement can be observed from our experiments on the data sets publicly available. Moreover, our framework is able to address the data scarcity problem of some resource-poor language pairs by utilizing largescale parallel training corpora of other language pairs to improve the translation quality. Our model is efficient and gets faster and better convergence for both resource-rich and resource-poor language pair under the multi-task learning. In the future, we would like to extend our learning framework to more practical setting. For example, train a multi-task learning model with the same target language from different domains to improve multiple domain translation within one model. The correlation of different target languages will also be considered in the future work. Acknowledgement This paper is supported by the 973 program No. 2014CB340505. We would like to thank anonymous reviewers for their insightful comments. References Rie Kubota Ando and Tong Zhang. 2005. 
A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learning Research, 6:1817–1853. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. CoRR, abs/1409.0473. Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, James Bergstra, Ian J. Goodfellow, Arnaud Bergeron, Nicolas Bouchard, David Warde-Farley, and Yoshua Bengio. 2012. Theano: new features and speed improvements. CoRR, abs/1211.5590. L´eon Bottou. 1991. Stochastic gradient learning in neural networks. In Proceedings of Neuro-Nˆımes 91, Nimes, France. EC2. KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoderdecoder approaches. CoRR, abs/1409.1259. Trevor Cohn and Mirella Lapata. 2007. Machine translation by triangulation: Making effective use of multi-parallel corpora. In Proc. ACL, pages 728– 735. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Lei Cui, Xilun Chen, Dongdong Zhang, Shujie Liu, Mu Li, and Ming Zhou. 2013. Multi-domain adaptation for SMT using multi-task learning. In Proc. EMNLP, pages 1055–1065. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M. Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proc. ACL, pages 1370–1380. Jianfeng Gao, Xiaodong He, Wen-tau Yih, and Li Deng. 2014. Learning continuous phrase representations for translation modeling. In Proc. ACL, pages 699–709. 1731 Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun’ichi Tsujii. 2012. Incremental joint approach to word segmentation, POS tagging, and dependency parsing in chinese. In Proc. ACL, pages 1045–1053. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proc. EMNLP, pages 1700–1709. Philipp Koehn. 2004. Pharaoh: A beam search decoder for phrase-based statistical machine translation models. In Machine Translation: From Real Users to Research, 6th Conference of the Association for Machine Translation in the Americas, AMTA 2004, Washington, DC, USA, September 28-October 2, 2004, Proceedings, pages 115–124. Zhenghua Li, Min Zhang, Wanxiang Che, Ting Liu, and Wenliang Chen. 2014. Joint optimization for chinese POS tagging and dependency parsing. IEEE/ACM Transactions on Audio, Speech & Language Processing, 22(1):274–286. Shujie Liu, Nan Yang, Mu Li, and Ming Zhou. 2014. A recursive recurrent neural network for statistical machine translation. In Proc. ACL, pages 1491– 1500. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proc. ACL, ACL 2002, pages 311–318, Stroudsburg, PA, USA. Association for Computational Linguistics. Rico Sennrich, Holger Schwenk, and Walid Aransa. 2013. A multi-domain translation model framework for statistical machine translation. In Proc. ACL, pages 832–840. Martin Sundermeyer, Tamer Alkhouli, Joern Wuebker, and Hermann Ney. 2014. Translation modeling with bidirectional recurrent neural networks. In Proc. EMNLP, pages 14–25. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. 
In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, December 8-13 2014, Montreal, Quebec, Canada, pages 3104–3112. Hua Wu and Haifeng Wang. 2007. Pivot language approach for phrase-based statistical machine translation. In Proc. ACL, pages 165–181. Matthew D. Zeiler. 2012. ADADELTA: an adaptive learning rate method. CoRR, abs/1212.5701. 1732
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1733–1743, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Accurate Linear-Time Chinese Word Segmentation via Embedding Matching Jianqiang Ma SFB 833 and Department of Linguistics University of Tübingen, Germany [email protected] Erhard Hinrichs SFB 833 and Department of Linguistics University of Tübingen, Germany [email protected] Abstract This paper proposes an embedding matching approach to Chinese word segmentation, which generalizes the traditional sequence labeling framework and takes advantage of distributed representations. The training and prediction algorithms have linear-time complexity. Based on the proposed model, a greedy segmenter is developed and evaluated on benchmark corpora. Experiments show that our greedy segmenter achieves improved results over previous neural network-based word segmenters, and its performance is competitive with state-of-the-art methods, despite its simple feature set and the absence of external resources for training. 1 Introduction Chinese sentences are written as character sequences without word delimiters, which makes word segmentation a prerequisite of Chinese language processing. Since Xue (2003), most work has formulated Chinese word segmentation (CWS) as sequence labeling (Peng et al., 2004) with character position tags, which has lent itself to structured discriminative learning with the benefit of allowing rich features of segmentation configurations, including (i) context of character/word ngrams within local windows, (ii) segmentation history of previous characters, or the combinations of both. These feature-based models still form the backbone of most state-of-the art systems. Nevertheless, many feature weights in such models are inevitably poorly estimated because the number of parameters is so large with respect to the limited amount of training data. This has motivated the introduction of low-dimensional, realvalued vectors, known as embeddings, as a tool to deal with the sparseness of the input. Embeddings allow linguistic units appearing in similar contexts to share similar vectors. The success of embeddings has been observed in many NLP tasks. For CWS, Zheng et al. (2013) adapted Collobert et al. (2011) and uses character embeddings in local windows as input for a two-layer network. The network predicts individual character position tags, the transitions of which are learned separately. Mansur et al. (2013) also developed a similar architecture, which labels individual characters and uses character bigram embeddings as additional features to compensate the absence of sentence-level modeling. Pei et al. (2014) improved upon Zheng et al. (2013) by capturing the combinations of context and history via a tensor neural network. Despite their differences, these CWS approaches are all sequence labeling models. In such models, the target character can only influence the prediction as features. Consider the the segmentation configuration in (1), where the dot appears before the target character in consideration and the box (2) represents any character that can occur in the configuration. In that example, the known history is that the first two characters 中国‘China’ are joined together, which is denoted by the underline. 
(1) 中国·2 格外(where 2 ∈{风, 规, ...}) (2) 中国风格外‘China-style especially’ (3) 中国规格外‘besides Chinese spec.’ For possible target characters, 风‘wind’ and 规 ‘rule’, the correct segmentation decisions for them are opposite, as shown in (2) and (3), respectively. In order to correctly predict both, current models can set higher weights for target character-specific features. However, in general, 风is more likely to start a new word instead of joining the existing one as in this example. Given such conflicting evidence, models can rarely find optimal feature weights, if they exist at all. 1733 The crux of this conflicting evidence problem is that similar configurations can suggest opposite decisions, depending on the target character and vice versa. Thus it might be useful to treat segmentation decisions for distinct characters separately. And instead of predicting general segmentation decisions given configurations, it could be beneficial to model the matching between configurations and character-specific decisions. To this end, this paper proposes an embedding matching approach (Section 2) to CWS, in which embeddings for both input and output are learned and used as representations to counteract sparsities. Thanks to embeddings of characterspecific decisions (actions) serving as both input features and output, our hidden-layer-free architecture (Section 2.2) is capable of capturing prediction histories in similar ways as the hidden layers in recurrent neural networks (Mikolov et al., 2010). We evaluate the effectiveness of the model via a linear-time greedy segmenter (Section 3) implementation. The segmenter outperforms previous embedding-based models (Section 4.2) and achieves state-of-the-art results (Section 4.3) on a benchmark dataset. The main contributions of this paper are: • A novel embedding matching model for Chinese word segmentation. • Developing a greedy word segmenter, which is based on the matching model and achieves competitive results. • Introducing the idea of character-specific segmentation action embeddings as both feature and output, which are cornerstones of the model and the segmenter. 2 Embedding Matching Models for Chinese Word Segmentation We propose an embedding based matching model for CWS, the architecture of which is shown in Figure 1. The model employs trainable embeddings to represent both sides of the matching, which will be specified shortly, followed by details of the architecture in Section 2.2. 2.1 Segmentation as Configuration-Action Matching Output. The word segmentation output of a character sequence can be described as a sequence of character-specific segmentation actions. We use separation (s) and combination (c) as possible actions for each character, where a separation action starts a new word with the current character, while a combination action appends the character to the preceding ones. We model character-action combinations instead of atomic, character independent actions. As a running example, sentence (4b) is the correct segmentation for (4a), which can be represented as the sequence (猫-s, 占-s, 领-c, 了-s, 婴-s, 儿-c, 床-c) . (4) a. 猫占领了婴儿床 b. 猫占领了婴儿床 c. ‘The cat occupied the crib’ Input. The input are the segmentation configurations for each character under consideration, which are described by context and history features. The context features of captures the characters that are in the same sentence of the current character and the history features encode the segmentation actions of previous characters. • Context features. 
These refer to character unigrams and bigrams that appear in the local context window of h characters that centers at ci, where ci is 领in example (4) and h = 5 is used in this paper. The template for features are shown in Table 1. For our example, the uni- and bi-gram features would be: 猫, 占, 领, 了, 婴and 猫占, 占领, 领了, 了 婴, respectively. • History features. To make inference tractable, we assume that only previous l character-specific actions are relevant, where l = 2 for this study. In our example, 猫-s and 战-s are the history features. Such features capture partial information of syntactic and semantic dependencies between previous words, which are clues for segmentation that pure character contexts could not provide. A dummy character START is used to represent the absent (left) context characters in the case of the first l characters in a sentence. And the predicted action for the START symbol is always s. Matching. CWS is now modeled as the matching of the input (segmentation configuration) and output (two possible character-specific actions) for each character. Formally, a matching model learns 1734 Figure 1: The architecture of the embedding matching model for CWS. The model predicts the segmentation for the character 领in sentence (4), which is the second character of word 占领‘occupy’. Both feature and output embeddings are trainable parameters of the model. Group Feature template unigram ci−2, ci−1, ci, ci+1, ci+2 bigram ci−2ci−1, ci−1ci, cici+1, ci+1ci+2 Table 1: Uni- and bi-gram feature template the following function: g ( b1b2...bn, a1a2...an) = n ∏ j=1 f ( bj(aj−2, aj−1; cj−h 2 ...cj+ h 2 ), aj ) (1) where c1c2...cn is the character sequence, bj and aj are the segmentation configuration and action for character cj, respectively. In (1), bj(aj−2, aj−1; cj−h 2 ...cj+ h 2 ) indicates that the configuration for each character is a function that depends on the actions of the previous l characters and the characters in the local window of size h. Why embedding. The above matching model would suffer from sparsity if these outputs (character-specific action aj) were directly encoded as one-hot vectors, since the matching model can be seen as a sequence labeling model with C ×L outputs, where L is the number of original labels while C is the number of unique characters. For Chinese, C is at the order of 103 −104. The use of embeddings, however, can serve the matching model well thanks to their low dimensionality. 2.2 The Architecture The proposed architecture (Figure 1) has three components, namely look-up table, concatenation and softmax function for matching. We will go through each of them in this section. Look-up table. The mapping between features/outputs to their corresponding embeddings are kept in a look-up table, as in many previous embedding related work (Bengio et al., 2003; Pei et al., 2014). Such features are extracted from the training data. Formally, the embedding for each distinct feature d is denoted as Embed(d) ∈RN, which is a real valued vector of dimension N. Each feature is retrieved by its unique index. The retrieval of the embeddings for the output actions is similar. Concatenation. To predict the segmentation for the target character cj, its feature vectors are concatenated into a single vector, the input embedding, i(bj) ∈RN×K, where K is the number of features used to describe the configuration bj. Softmax. 
The model then computes the dot product of the input embedding i(bj) and each of 1735 the two output embeddings, o(aj,1) and o(aj,2), which represent the two possible segmentation actions for the target character cj, respectively. The exponential of the two raw scores are normalized to obtain probabilistic values ∈[0, 1]. We call the resulting scores matching probabilities, which denote probabilities that actions match the given segmentation configuration. In our example, 领-c has the probability of 0.7 to be the correct action, while 领-s is less likely with a lower probability of 0.3. Formally, the above matching procedure can be described as a softmax function, as shown in (2), which is also an individual f term in (1). f( bj, aj,k) = exp (i(bj) · o(aj,k)) ∑ k′ exp ( i(bj) · o(aj,k′) ) (2) In (2), aj,k (1 ≤k ≤2) represent two possible actions, such as 领-c and 领-s for 领in our example. Note that, to ensure the input and output are of the same dimension, for each character specific action, the model trains two distinct embeddings, one ∈RN as feature and the other ∈RN×K as output, where K is the number of features for each input. Best word segmentation of sentence. After plugging (2) into (1) and applying (and then dropping) logarithms for computational convenience, finding the best segmentation for a sentence becomes an optimization problem as shown in (3). In the formula, ˆY is the best action sequence found by the model among all the possible ones, Y = a1a2...an, where aj is the predicted action for the character cj (1 ≤j ≤n), which is either cj-s or cj-c, such as 领-s and 领-c. ˆY = argmax Y n ∑ j=1 exp (i(bj) · o(aj)) ∑ k exp (i(bj) · o(aj,k)) (3) 3 The Greedy Segmenter Our model depends on the actions predicted for the previous two characters as history features. Traditionally, such scenarios call for dynamic programming for exact inference. However, preliminary experiments showed that, for our model, a Viterbi search based segmenter, even supported by conditional random field (Lafferty et al., 2001) style training, yields similar results as the greedy search based segmenter in this section. Since the greedy segmenter is much more efficient in training and testing, the rest of the paper will focus on the proposed greedy segmenter, the details of which will be described in this section. 3.1 Greedy Search Initialization. The first character in the sentence is made to have two left side characters that are dummy symbols of START, whose predicted actions are always START-s, i.e. separation. Iteration. The algorithms predicts the action for each character cj, one at a time, in a left-to-right, incremental manner, where 1 ≤j ≤n and n is the sentence length. To do so, it first extracts context features and history features, the latter of which are the predicted character-specific actions for the previous two characters. Then the model matches the concatenated feature embedding with embeddings of the two possible character-specific actions, cj-s and ci-c. The one with higher matching probability is predicted as segmentation action for the character, which is irreversible. After the action for the last character is predicted, the segmented word sequence of the sentence is built from the predicted actions deterministically. Hybrid matching. Character-specific embeddings are capable of capturing subtle word formation tendencies of individual characters, but such representations are incapable of covering matching cases for unknown target characters. 
Another minor issue is that the action embeddings for certain low frequent characters may not be sufficiently trained. To better deal with these scenarios, We also train two embeddings to represent character-independent segmentation actions, ALL-s and ALL-c, and use them to average with or substitute embeddings of infrequent or unknown characters, which are either insufficiently trained or nonexistent. Such strategy is called hybrid matching, which can improve accuracy. Complexity. Although the total number of actions is large, the matching for each target character only requires the two actions that correspond to that specific character, such as 领-s and 领-c for 领in our example. Each prediction is thus similar to a softmax computation with two outputs, which costs constant time C. Greedy search ensures that the total time for predicting a sentence of n characters is n × C, i.e. linear time complexity, with a minor overhead for mapping actions to segmentations. 1736 3.2 Training The training procedure first predicts the action for the current character with current parameters, and then optimizes the log likelihood of correct segmentation actions in the gold segmentations to update parameters. Ideally, the matching probability for the correct action embedding should be 1 while that of the incorrect one should be 0. We minimize the cross-entropy loss function as in (4) for the segmentation prediction of each character cj to pursue this goal. The loss function is convex, similar to that of maximum entropy models. J = − K ∑ k=1 δ (aj,k) log exp (i · o(aj,k)) ∑ k′ exp ( i · o(aj,k′) ) (4) where aj,k denotes a possible action for cj and i is a compact notation for i(bj). In (4), δ(aj,k) is an indicator function defined by the following formula, where ˆaj denotes the correct action. δ(aj,k) = { 1, if aj,k = ˆaj 0, otherwise To counteract over-fitting, we add L2 regularization term to the loss function, as follows: J = J + K ∑ k=1 λ 2 ( ||i||2 + ||o(aj,k)||2) (5) The formula in (4) and (5) are similar to that of a standard softmax regression, except that both input and output embeddings are parameters to be trained. We perform stochastic gradient descent to update input and output embeddings in turn, each time considering the other as constant. We give the gradient (6) and the update rule (7) for the input embedding i(bj) (i for short), where ok is a short notation for o(aj,k). The gradient and update for output embeddings are similar. The α in (7) is the learning rate, which we use a linear decay scheme to gradually shrink it from its initial value to zero. Note that the update for the input embedding i is actually performed for the feature embeddings that form i in the concatenation step. ∂J ∂i = ∑ k ( f (bj, aj,k) −δ (aj,k)) · ok + λi (6) i = i −α∂J ∂i (7) Complexity. For each iteration of the training process, the time complexity is also linear to the input character number, as compared with search, only a few constant time operations of gradient computation and parameter updates are performed for each character. 4 Experiments 4.1 Data and Evaluation Metric In the experiments, we use two widely used and freely available1 manually word-segmented corpora, namely, PKU and MSR, from the second SIGHAN international Chinese word segmentation bakeoff (Emerson, 2005). Table 2 shows the details of the two dataset. 
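Before turning to the evaluation, the per-character matching step (Eq. (2)) and the corresponding embedding update (Eqs. (4), (6) and (7)) of Sections 2.2 and 3.2 can be summarized in a few lines of code. The NumPy sketch below is only an illustration under assumed shapes (N = 50 and K = 11, i.e. five unigram, four bigram and two history features, giving a 550-dimensional input); the names and random initial values are not taken from the authors' released implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K = 50, 11            # embedding size; 5 unigram + 4 bigram + 2 history features
alpha, lam = 0.1, 1e-3   # learning rate and L2 strength (illustrative values)

# Feature embeddings for one configuration b_j, concatenated into i(b_j) (550 dims).
feat_embs = rng.normal(scale=0.01, size=(K, N))
i_bj = feat_embs.reshape(-1)

# Output embeddings o(a_{j,1}), o(a_{j,2}) for the two character-specific actions
# of the target character, e.g. 领-s and 领-c.
out_embs = rng.normal(scale=0.01, size=(2, N * K))

def matching_probs(i_vec, o_mat):
    """Eq. (2): softmax over the dot products of input and output embeddings."""
    scores = o_mat @ i_vec
    scores -= scores.max()            # numerical stability
    e = np.exp(scores)
    return e / e.sum()

gold = 1                              # say 领-c is the correct action
p = matching_probs(i_bj, out_embs)
delta = np.eye(2)[gold]               # indicator vector delta(a_{j,k})

# Gradient of the cross-entropy loss (Eq. (4)) w.r.t. the input embedding,
# as in Eq. (6): sum_k (p_k - delta_k) * o_k + lambda * i, then the step of Eq. (7).
grad_i = (p - delta) @ out_embs + lam * i_bj
i_bj = i_bj - alpha * grad_i

# The output embeddings are updated analogously, holding the input fixed
# (a full implementation would recompute p between the two updates).
grad_o = np.outer(p - delta, i_bj) + lam * out_embs
out_embs -= alpha * grad_o
print("p(s), p(c) =", np.round(p, 3))
```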
All evaluations in this paper are conducted with official training/testing set split using official scoring script.2 PKU MSR Word types 5.5 × 104 8.8 × 104 Word tokens 1.1 × 106 2.4 × 106 Character types 5 × 103 5 × 103 Character tokens 1.8 × 106 4.1 × 106 Table 2: Corpus details of PKU and MSR The segmentation accuracy is evaluated by precision (P), recall (R), F-score and Roov, the recall for out-of-vocabulary words. Precision is defined as the number of correctly segmented words divided by the total number of words in the segmentation result. Recall is defined as the number of correctly segmented words divided by the total number of words in the gold standard segmentation. In particular, Roov reflects the model generalization ability. The metric for overall performance, the evenly-weighted F-score is calculated as in (8): F = 2 × P × R P + R (8) To comply with CWS evaluation conventions and make comparisons fair, we distinguish the following two settings: • closed-set: no extra resource other than training corpora is used. • open-set: additional lexicon, raw corpora, etc are used. 1http://www.sighan.org/bakeoff2005/ 2http://www.sighan.org/bakeoff2003/score 1737 We will report the final results of our model3 on PKU and MSR corpora in comparison with previous embedding based models (Section 4.2) and state-of-the-art systems (Section 4.3), before going into detailed experiments for model analyses (Section 4.5). 4.2 Comparison with Previous Embedding-Based Models Table 3 shows the results of our greedy segmenter on the PKU and MSR datasets, which are compared with embedding-based segmenters in previous studies.4 In the table, results for both closedset and open-set setting are shown for previous models. In the open-set evaluations, all three previous work use pre-training to train character ngram embeddings from large unsegmented corpora to initialize the embeddings, which will be later trained with the manually word-segmented training data. For our model, we report the closeset results only, as pre-training does not significant improve the results in our experiments (Section 4.5). As shown in Table 3, under close-set evaluation, our model significantly outperform previous embedding based models in all metrics. Compared with the previous best embedding-based model, our greedy segmenter has achieved up to 2.2% and 25.8% absolute improvements (MSR) on F-score and Roov, respectively. Surprisingly, our close-set results are also comparable to the best open-set results of previous models. As we will see in (Section 4.4), when using same or less character uniand bi-gram features, our model still outperforms previous embedding based models in closed-set evaluation, which shows the effectiveness of our matching model. Significance test. Table 4 shows the 95% confidence intervals (CI) for close-set results of our model and the best performing previous model (Pei et al., 2014), which are computed by formula (9), following (Emerson, 2005). CI = 2 √ F(1 −F) N (9) where F is the F-score value and the N is the word token count of the testing set, which is 104,372 and 106,873 for PKU and MSR, respectively. We see 3Our implementation: https://zenodo.org/record/17645. 4The results for Zheng et al. (2013) are from the reimplementation of Pei et al. (2014). that the confidence intervals of our results do not overlap with that of (Pei et al., 2014), meaning that our improvements are statistically significant. 
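Formula (9) is simple enough to verify directly. The short script below is a worked check rather than part of the paper; it recomputes the 95% confidence intervals of Table 4 from the reported F-scores and the test-set word token counts given above.

```python
import math

def confidence_interval(f_score: float, n_tokens: int) -> float:
    """95% confidence interval for an F-score, following formula (9):
    CI = 2 * sqrt(F * (1 - F) / N), with F as a proportion in [0, 1]."""
    return 2.0 * math.sqrt(f_score * (1.0 - f_score) / n_tokens)

tokens = {"PKU": 104_372, "MSR": 106_873}        # test-set word token counts

# Closed-set F-scores (percent) from Table 4.
results = {
    ("Pei et al.", "PKU"): 93.5, ("Pei et al.", "MSR"): 94.4,
    ("This work", "PKU"): 95.1, ("This work", "MSR"): 96.6,
}

for (system, corpus), f_pct in results.items():
    ci = confidence_interval(f_pct / 100.0, tokens[corpus])
    print(f"{system:10s} {corpus}: F = {f_pct:.1f}%, CI = ±{100 * ci:.2f}%")

# On PKU: 93.5 ± 0.15 vs. 95.1 ± 0.13; the intervals do not overlap,
# so the improvement is significant under this test.
```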
4.3 Comparison with the State-of-the-Art Systems Table 5 shows that the results of our greedy segmenter are competitive with the state-of-the-art supervised systems (Best05 closed-set, Zhang and Clark, 2007), although our feature set is much simpler. More recent state-of-the-art systems rely on both extensive feature engineering and extra raw corpora to boost performance, which are semi-supervised learning. For example, Zhang et al (2013) developed 8 types of static and dynamic features to maximize the co-training system that used extra corpora of Chinese Gigaword and Baike, each of which contains more than 1 billion character tokens. Such systems are not directly comparable with our supervised model. We leave the development of semi-supervised learning methods for our model as future work. 4.4 Features Influence Table 6 shows the F-scores of our model on PKU dataset when different features are removed (‘w/o’) or when only a subset of features are used. Features complement each other and removing any group of features leads to a limited drop of Fscore up to 0.7%. Note that features of previous (two) actions are even more informative than all unigram features combined, suggesting that intra- an inter-word dependencies reflected by action features are strong evidence for segmentation. Moreover, using same or less character ngram features, our model outperforms previous embedding based models, which shows the effectiveness of our matching model. 4.5 Model Analysis Learning curve. Figure 2 shows that the training procedure coverages quickly. After the first iteration, the testing F-scores are already 93.5% and 95.7% for PKU and MSR, respectively, which then gradually reach their maximum within the next 9 iterations before the curve flats out. Speed. With an unoptimized single-thread Python implementation running on a laptop with intel Core-i5 CPU (1.9 GHZ), each iteration of the training procedure on PKU dataset takes about 5 minutes, or 6,000 characters per second. The pre1738 Models PKU Corpus MSR Corpus P R F Roov P R F Roov Zheng et al.(2013) 92.8 92.0 92.4 63.3 92.9 93.6 93.3 55.7 + pre-training† 93.5 92.2 92.8 69.0 94.2 93.7 93.9 64.1 Mansur et al. (2013) 93.6 92.8 93.2 57.9 92.3 92.2 92.2 53.7 + pre-training† 94.0 93.9 94.0 69.5 93.1 93.1 93.1 59.7 Pei et al. (2014) 93.7 93.4 93.5 64.2 94.6 94.2 94.4 61.4 + pre-training† 94.4 93.6 94.0 69.0 95.2 94.6 94.9 64.8 + pre-training & bigram† 95.2 97.2 This work (closed-set) 95.5 94.6 95.1 76.0 96.6 96.5 96.6 87.2 Table 3: Comparison with previous embedding based models. Numbers in percentage. Results with † used extra corpora for (pre-)training. Models PKU MSR F CI F CI Pei et al. 93.5 ±0.15 94.4 ±0.14 This work 95.1 ±0.13 96.6 ±0.11 Table 4: Significance test of closed-set results of Pei et al (2014) and our model. Model PKU MSR Best05 closed-set 95.0 96.4 Zhang et al. (2006) 95.1 97.1 Zhang and Clark (2007) 94.5 97.2 Wang et al. (2012) 94.1 97.2 Sun et al. (2009) 95.2 97.3 Sun et al. (2012) 95.4 97.4 Zhang et al. (2013) † 96.1 97.4 This work 95.1 96.6 Table 5: Comparison with the state-of-the-art systems. Results with † used extra lexicon/raw corpora for training, i.e. in open-set setting. Best05 refers to the best closed-set results in 2nd SIGHAN bakeoff. diction speed is above 13,000 character per second. Hyper parameters. The hyper parameters used in the experiments are shown in Table 7. 
We initialized hyper parameters with recommendations in literature before tuning with dev-set experiments, each of which change one parameter by a magnitude. We fixed the hyper parameter to the current setting without spending too much time on tuning, since that is not the main purpose of this paper. • Embedding size determines the number of parameters to be trained, thus should fit the Feature F-score Feature F-score All features 95.1 uni-&bi-gram 94.6 w/o action 94.6 only action 93.3 w/o unigram 94.8 only unigram 92.1 w/o bigram 94.4 only bigram 94.2 Table 6: The influence of features. F-score in percentage on the PKU corpus. Figure 2: The learning curve of our model. training data size to achieve good performance. We tried the size of 30 and 100, both of which performs worse than 50. A possible tuning is to use different embedding size for different groups of features instead of setting N1 = 50 for all features. • Context window size. A window size of 3-5 characters achieves comparable results. Zheng et al. (2013) suggested that context window larger than 5 may lead to inferior results. • Initial Learning rate. We found that several learning rates between 0.04 to 0.15 yielded very similar results as the one reported here. The training is not very sensitive to reason1739 able values of initial learning rate. However, Instead of our simple linear decay of learning rate, it might be useful to try more sophisticated techniques, such as AdaGrad and exponential decaying (Tsuruoka et al., 2009; Sun et al., 2013). • Regularization. Our model suffers a little from over-fitting, if no regularization is used. In that case, the F-score on PKU drops from 95.1% to 94.7%. • Pre-training. We tried pre-training character embeddings using word2vec5 with Chinese Gigaword Corpus6 and use them to initialize the corresponding embeddings in our model, as previous work did. However, we were only able to see insignificant F-score improvements within 0.1% and observed that the training F-score reached 99.9% much earlier. We hypothesize that pre-training leads to sub-optimal local maximums for our model. • Hybrid matching. We tried applying hybrid matching (Section 3.1) for target characters which are less frequent than the top ftop characters, including unseen characters, which leads to about 0.15% of F-score improvements. Size of feature embed’ N1 = 50 Size of output embed’ N2 = 550 Window size h = 5 Initial learning rate α = 0.1 Regularization λ = 0.001 Hybrid matching ftop = 8% Table 7: Hyper parameters of our model. 5 Related Work Word segmentation. Most modern segmenters followed Xue (2003) to model CWS as sequence labeling of character position tags, using conditional random fields (Peng et al. 2004), structured perceptron (Jiang et al., 2008), etc. Some notable exceptions are (Zhang and Clark, 2007; Zhang et al., 2012), which exploited rich word-level features and (Ma et al., 2012; Ma, 2014; Zhang et al., 2014), which explicitly model word structures. Our work generalizes the sequence labeling to a 5https://code.google.com/p/word2vec/ 6https://catalog.ldc.upenn.edu/LDC2005T14 more flexible framework of matching, and predicts actions as in (Zhang and Clark, 2007; Zhang et al., 2012) instead of position tags to prevent the greedy search from suffering tag inconsistencies. 
To better utilize resources other than training data, our model might benefit from techniques used in recent state-of-the-art systems, such as semi-supervised learning (Zhao and Kit, 2008; Sun and Xu, 2011; Zhang et al., 2013; Zeng et al., 2013), joint models (Li and Zhou, 2012; Qian and Liu, 2012), and partial annotations (Liu et al., 2014; Yang and Vozila, 2014). Distributed representation and CWS. Distributed representation are useful for various NLP tasks, such as POS tagging (Collobert et al., 2011), machine translation (Devlin et al., 2014) and parsing (Socher et al., 2013). Influenced by Collobert et al. (2011), Zheng et al. (2013) modeled CWS as tagging and treated sentence-level tag sequence as the combination of individual tag predictions and context-independent tag transition. Mansur et al. (2013) was inspired by Bengio et al. (2003) and used character bigram embeddings to compensate for the absence of sentence level optimization. To model interactions between tags and characters, which are absent in these two CWS models, Pei et al. (2014) introduced the tag embedding and used a tensor hidden layer in the neural net. In contrast, our work uses character-specific action embeddings to explicitly capture such interactions. In addition, our work gains efficiency by avoiding hidden layers, similar as Mikolov et al. (2013). Learning to match. Matching heterogeneous objects has been studied in various contexts before, and is currently flourishing, thanks to embeddingbased deep (Gao et al., 2014) and convolutional (Huang et al., 2013; Hu et al., 2014) neural networks. This work develops a matching model for CWS and differs from others in its“shallow”yet effective architecture. 6 Discussion Simple architecture. It is possible to adopt standard feed-forward neural network for our embedding matching model with character-action embeddings as both feature and output. Nevertheless, we designed the proposed architecture to avoid hidden layers for simplicity, efficiency and easytuning, inspired by word2vec. Our simple architecture is effective, demonstrated by the improved results over previous neural-network word seg1740 menters, all of which use feed-forward architecture with different features and/or layers. It might be interesting to directly compare the performances of our model with same features on the current and feed-forward architectures, which we leave for future work. Greedy and exact search-based models. As mentioned in Section 3, we implemented and preliminarily experimented with a segmenter that trains a similar model with exact search via Viterbi algorithm. On the PKU corpus, its F-score is 0.944, compared with greedy segmenter’s 0.951. Its training and testing speed are up to 7.8 times slower than that of the greedy search segmenter. It is counter-intuitive that the performance of the exact-search segmenter is no better or even worse than that of the greedy-search segmenter. We hypothesize that since the training updates parameters with regard to search errors, the final model is “tailored” for the specific search method used, which makes the model-search combination of greedy search segmenter not necessarily worse than that of exact search segmenter. Another way of looking at it is that search is less important when the model is accurate. In this case, most step-wise decisions are correct in the first place, which requires no correction from the search algorithm. 
Empirically, Zhang and Clark (2011) also reported exact-search segmenter performing worse than beam-search segmenters. Despite that the greedy segmenter is incapable of considering future labels, this rarely causes problems in practice. Our greedy segmenter has good results, compared with the exact-search segmenter above and previous approaches, most of which utilize exact search. Moreover, the greedy segmenter has additional advantages of faster training and prediction. Sequence labeling and matching. A traditional sequence labeling model such as CRF has K (number of labels) target-character-independent weight vectors, where the target character influences the prediction via the weights of the features that contain it. In a way, a matching model can be seen as a family of “sub-models”, which keeps a group of weight vectors (the output embeddings) for each unique target character. Different target characters activate different sub-models, allowing opposite predictions for similar input features, as the target weight vectors used are different. 7 Conclusion and Future Work In this paper, we have introduced the matching formulation for Chinese word segmentation and proposed an embedding matching model to take advantage of distributed representations. Based on the model, we have developed a greedy segmenter, which outperforms previous embeddingbased methods and is competitive with state-ofthe-art systems. These results suggest that it is promising to model CWS as configuration-action matching using distributed representations. In addition, linear-time training and testing complexity of our simple architecture is very desirable for industrial application. To the best of our knowledge, this is the first greedy segmenter that is competitive with the state-of-the-art discriminative learning models. In the future, we plan to investigate methods for our model to better utilize external resources. We would like to try using convolutional neural network to automatically encode ngram-like features, in order to further shrink parameter space. It is also interesting to study whether extending our model with deep architectures can benefit CWS. Lastly, it might be useful to adapt our model to tasks such as POS tagging and name entity recognition. Acknowledgments The authors would like to thank the anonymous reviewers for their very helpful and constructive suggestions. We are indebted to Çağrı Çöltekin for discussion and comments, to Dale Gerdemann, Cyrus Shaoul, Corina Dima, Sowmya Vajjala and Helmut Schmid for their useful feedback on an earlier version of the manuscript. Financial support for the research reported in this paper was provided by the German Research Foundation (DFG) as part of the Collaborative Research Center “Emergence of Meaning” (SFB 833) and by the German Ministry of Education and Technology (BMBF) as part of the research grant CLARIN-D. References Yoshua Bengio, Réjean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155. Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from 1741 scratch. The Journal of Machine Learning Research, 12:2493–2537. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of ACL, pages 1370–1380. Thomas Emerson. 2005. 
The second international chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, volume 133. Jianfeng Gao, Patrick Pantel, Michael Gamon, Xiaodong He, Li Deng, and Yelong Shen. 2014. Modeling interestingness with deep neural networks. In Proceedings of EMNLP, pages 2–13. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, pages 2042–2050. Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proceedings of the ACM International Conference on Information & Knowledge Management, pages 2333–2338. Wenbin Jiang, Liang Huang, Qun Liu, and Yajuan Lü. 2008. A cascaded linear model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of ACL, pages 897–904. John Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: probabilistic models for segmenting and labeling sequence data. In Proceedings of International Conference on Machine Learning, pages 282–289. Zhongguo Li and Guodong Zhou. 2012. Unified dependency parsing of Chinese morphological and syntactic structures. In Proceedings of EMNLP, pages 1445–1454. Yijia Liu, Yue Zhang, Wanxiang Che, Ting Liu, and Fan Wu. 2014. Domain adaptation for CRF-based Chinese word segmentation using free annotations. In Proceedings of EMNLP, pages 864–874. Jianqiang Ma, Chunyu Kit, and Dale Gerdemann. 2012. Semi-automatic annotation of Chinese word structure. In Proceedings of the Second CIPSSIGHAN Joint Conference on Chinese Language Processing, pages 9–17. Jianqiang Ma. 2014. Automatic refinement of syntactic categories in Chinese word structures. In Proceedings of LREC. Mairgup Mansur, Wenzhe Pei, and Baobao Chang. 2013. Feature-based neural language model and Chinese word segmentation. In Proceedings of IJCNLP, pages 1271–1277. Tomas Mikolov, Martin Karafiát, Lukas Burget, Jan Cernockỳ, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Proceedings of INTERSPEECH, pages 1045–1048. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Wenzhe Pei, Tao Ge, and Chang Baobao. 2014. Maxmargin tensor neural network for Chinese word segmentation. In Proceedings of ACL, pages 239–303. Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese segmentation and new word detection using conditional random fields. In Proceedings of COLING, pages 562–571. Xian Qian and Yang Liu. 2012. Joint Chinese word segmentation, POS tagging and parsing. In Proceedings of EMNLP-CoNLL, pages 501–511. Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013. Parsing with compositional vector grammars. In Proceedings of ACL, pages 455–465. Weiwei Sun and Jia Xu. 2011. Enhancing Chinese word segmentation using unlabeled data. In Proceedings of EMNLP, pages 970–979. Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshimasa Tsuruoka, and Jun’ichi Tsujii. 2009. A discriminative latent variable Chinese segmenter with hybrid word/character information. In Proceedings of NAACL, pages 56–64. Xu Sun, Houfeng Wang, and Wenjie Li. 2012. 
Fast online training with frequency-adaptive learning rates for Chinese word segmentation and new word detection. In Proceedings of ACL, pages 253–262. Xu Sun, Yaozhong Zhang, Takuya Matsuzaki, Yoshimasa Tsuruoka, and Jun’ichi Tsujii. 2013. Probabilistic Chinese word segmentation with non-local information and stochastic training. Information Processing & Management, 49(3):626–636. Yoshimasa Tsuruoka, Jun’ichi Tsujii, and Sophia Ananiadou. 2009. Stochastic gradient descent training for L1-regularized log-linear models with cumulative penalty. In Proceedings of ACL-IJCNLP, pages 477–485. Kun Wang, Chengqing Zong, and Keh-Yih Su. 2012. Integrating generative and discriminative characterbased models for Chinese word segmentation. ACM Transactions on Asian Language Information Processing (TALIP), 11(2):7. Nianwen Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29–48. 1742 Fan Yang and Paul Vozila. 2014. Semi-supervised Chinese word segmentation using partial-label learning With conditional random fields. In Proceedings of EMNLP, page 90–98. Xiaodong Zeng, Derek F Wong, Lidia S Chao, and Isabel Trancoso. 2013. Graph-based semi-supervised model for joint Chinese word segmentation and partof-speech tagging. In Proceedings of ACL, pages 770–779. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In Proceedings of ACL, pages 840–847. Yue Zhang and Stephen Clark. 2011. Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1):105–151. Ruiqiang Zhang, Genichiro Kikui, and Eiichiro Sumita. 2006. Subword-based tagging by conditional random fields for Chinese word segmentation. In Proceedings of NAACL, pages 193–196. Kaixu Zhang, Maosong Sun, and Changle Zhou. 2012. Word segmentation on Chinese mirco-blog data with a linear-time incremental model. In Proceedings of the 2nd CIPS-SIGHAN Joint Conference on Chinese Language Processing, pages 41–46. Longkai Zhang, Houfeng Wang, Xu Sun, and Maigup Mansur. 2013. Exploring representations from unlabeled data with co-training for Chinese word segmentation. In Proceedings of EMNLP, pages 311– 321. Meishan Zhang, Yue Zhang, Wanxiang Che, and Ting Liu. 2014. Character-level Chinese dependency parsing. In Proceedings of ACL, pages 1326–1336. Hai Zhao and Chunyu Kit. 2008. Unsupervised segmentation helps supervised learning of character tagging for word segmentation and named entity recognition. In Proceedings of IJCNLP, pages 106–111. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep Learning for Chinese Word Segmentation and POS Tagging. In Proceedings of EMNLP, pages 647–657. 1743
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1744–1753, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Gated Recursive Neural Network for Chinese Word Segmentation Xinchi Chen, Xipeng Qiu∗, Chenxi Zhu, Xuanjing Huang Shanghai Key Laboratory of Intelligent Information Processing, Fudan University School of Computer Science, Fudan University 825 Zhangheng Road, Shanghai, China {xinchichen13,xpqiu,czhu13,xjhuang}@fudan.edu.cn Abstract Recently, neural network models for natural language processing tasks have been increasingly focused on for their ability of alleviating the burden of manual feature engineering. However, the previous neural models cannot extract the complicated feature compositions as the traditional methods with discrete features. In this paper, we propose a gated recursive neural network (GRNN) for Chinese word segmentation, which contains reset and update gates to incorporate the complicated combinations of the context characters. Since GRNN is relative deep, we also use a supervised layer-wise training method to avoid the problem of gradient diffusion. Experiments on the benchmark datasets show that our model outperforms the previous neural network models as well as the state-of-the-art methods. 1 Introduction Unlike English and other western languages, Chinese do not delimit words by white-space. Therefore, word segmentation is a preliminary and important pre-process for Chinese language processing. Most previous systems address this problem by treating this task as a sequence labeling problem and have achieved great success. Due to the nature of supervised learning, the performance of these models is greatly affected by the design of features. These features are explicitly represented by the different combinations of context characters, which are based on linguistic intuition and statistical information. However, the number of features could be so large that the result models are too large to use in practice and prone to overfit on training corpus. ∗Corresponding author. Rainy 下 雨 Day 天 Ground 地 面 Accumulated water 积 水 M E S B Figure 1: Illustration of our model for Chinese word segmentation. The solid nodes indicate the active neurons, while the hollow ones indicate the suppressed neurons. Specifically, the links denote the information flow, where the solid edges denote the acceptation of the combinations while the dashed edges means rejection of that. As shown in the right figure, we receive a score vector for tagging target character “地” by incorporating all the combination information. Recently, neural network models have been increasingly focused on for their ability to minimize the effort in feature engineering. Collobert et al. (2011) developed a general neural network architecture for sequence labeling tasks. Following this work, many methods (Zheng et al., 2013; Pei et al., 2014; Qi et al., 2014) applied the neural network to Chinese word segmentation and achieved a performance that approaches the state-of-the-art methods. However, these neural models just concatenate the embeddings of the context characters, and feed them into neural network. Since the concatenation operation is relatively simple, it is difficult to model the complicated features as the traditional discrete feature based models. 
Although the complicated interactions of inputs can be modeled by the deep neural network, the previous neural model shows that the deep model cannot outperform the one with a single non-linear model. Therefore, the 1744 neural model only captures the interactions by the simple transition matrix and the single non-linear transformation . These dense features extracted via these simple interactions are not nearly as good as the substantial discrete features in the traditional methods. In this paper, we propose a gated recursive neural network (GRNN) to model the complicated combinations of characters, and apply it to Chinese word segmentation task. Inspired by the success of gated recurrent neural network (Chung et al., 2014), we introduce two kinds of gates to control the combinations in recursive structure. We also use the layer-wise training method to avoid the problem of gradient diffusion, and the dropout strategy to avoid the overfitting problem. Figure 1 gives an illustration of how our approach models the complicated combinations of the context characters. Given a sentence “雨 (Rainy) 天(Day) 地面(Ground) 积水(Accumulated water)”, the target character is “地”. This sentence is very complicated because each consecutive two characters can be combined as a word. To predict the label of the target character “地” under the given context, GRNN detects the combinations recursively from the bottom layer to the top. Then, we receive a score vector of tags by incorporating all the combination information in network. The contributions of this paper can be summarized as follows: • We propose a novel GRNN architecture to model the complicated combinations of the context characters. GRNN can select and preserve the useful combinations via reset and update gates. These combinations play a similar role in the feature engineering of the traditional methods with discrete features. • We evaluate the performance of Chinese word segmentation on PKU, MSRA and CTB6 benchmark datasets which are commonly used for evaluation of Chinese word segmentation. Experiment results show that our model outperforms other neural network models, and achieves state-of-the-art performance. 2 Neural Model for Chinese Word Segmentation Chinese word segmentation task is usually regarded as a character-based sequence labeling Input Window Characters Ci-2 Ci-1 Ci+1 Ci+2 Ci Lookup Table · · · · · · · · · · · · · · · 3 4 5 2 6 1 · · · d-1 d Features Linear W1 ×□+b1 · · · Number of Hidden Units Sigmoid g(□) · · · Number of Hidden Units Linear W2 ×□+b2 Number of tags · · · Tag Inference f(t|1) f(t|2) f(t|i) f(t|n-1) f(t|n) Aij Concatenate B E M S Figure 2: General architecture of neural model for Chinese word segmentation. problem. Each character is labeled as one of {B, M, E, S} to indicate the segmentation. {B, M, E} represent Begin, Middle, End of a multi-character segmentation respectively, and S represents a Single character segmentation. The general neural network architecture for Chinese word segmentation task is usually characterized by three specialized layers: (1) a character embedding layer; (2) a series of classical neural network layers and (3) tag inference layer. A illustration is shown in Figure 2. The most common tagging approach is based on a local window. The window approach assumes that the tag of a character largely depends on its neighboring characters. Firstly, we have a character set C of size |C|. Then each character c ∈C is mapped into an ddimensional embedding space as c ∈Rd by a lookup table M ∈Rd×|C|. 
For each character ci in a given sentence c1:n, the context characters ci−w1:i+w2 are mapped to their corresponding character embeddings as ci−w1:i+w2, where w1 and w2 are left and right context lengths respectively. Specifically, the unknown characters and characters exceeding the 1745 sentence boundaries are mapped to special symbols, “unknown”, “start” and “end” respectively. In addition, w1 and w2 satisfy the constraint w1 + w2 + 1 = w, where w is the window size of the model. As an illustration in Figure 2, w1, w2 and w are set to 2, 2 and 5 respectively. The embeddings of all the context characters are then concatenated into a single vector ai ∈RH1 as input of the neural network, where H1 = w × d is the size of Layer 1. And ai is then fed into a conventional neural network layer which performs a linear transformation followed by an element-wise activation function g, such as tanh. hi = g(W1ai + b1), (1) where W1 ∈RH2×H1, b1 ∈RH2, hi ∈RH2. H2 is the number of hidden units in Layer 2. Here, w, H1 and H2 are hyper-parameters chosen on development set. Then, a similar linear transformation is performed without non-linear function followed: f(t|ci−w1:i+w2) = W2hi + b2, (2) where W2 ∈R|T|×H2, b2 ∈R|T| and T is the set of 4 possible tags. Each dimension of vector f(t|ci−w1:i+w2) ∈R|T| is the score of the corresponding tag. To model the tag dependency, a transition score Aij is introduced to measure the probability of jumping from tag i ∈T to tag j ∈T (Collobert et al., 2011). 3 Gated Recursive Neural Network for Chinese Word Segmentation To model the complicated feature combinations, we propose a novel gated recursive neural network (GRNN) architecture for Chinese word segmentation task (see Figure 3). 3.1 Recursive Neural Network A recursive neural network (RNN) is a kind of deep neural network created by applying the same set of weights recursively over a given structure(such as parsing tree) in topological order (Pollack, 1990; Socher et al., 2013a). In the simplest case, children nodes are combined into their parent node using a weight matrix W that is shared across the whole network, followed by a non-linear function g(·). Specifically, if hL and hR are d-dimensional vector representations of left and right children nodes respectively, E M B S …… …… …… …… …… …… …… …… …… …… …… …… …… …… …… …… …… ci-2 ci-1 ci ci+1 ci+2 …… …… …… …… Linear xi yi = Ws × xi + bs Concatenate yi Figure 3: Architecture of Gated Recursive Neural Network for Chinese word segmentation. their parent node hP will be a d-dimensional vector as well, calculated as: hP = g ( W [ hL hR ]) , (3) where W ∈Rd×2d and g is a non-linear function as mentioned above. 3.2 Gated Recursive Neural Network The RNN need a topological structure to model a sequence, such as a syntactic tree. In this paper, we use a directed acyclic graph (DAG), as showing in Figure 3, to model the combinations of the input characters, in which two consecutive nodes in the lower layer are combined into a single node in the upper layer via the operation as Eq. (3). In fact, the DAG structure can model the combinations of characters by continuously mixing the information from the bottom layer to the top layer. Each neuron can be regarded as a complicated feature composition of its governed characters, similar to the discrete feature based models. The difference between them is that the neural one automatically learns the complicated combinations while the conventional one need manually design them. 
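To make the bottom-up construction over the DAG concrete, the following NumPy sketch applies the un-gated composition of Eq. (3) to a window of w = 5 character embeddings; the gated recursive unit introduced in the next subsection replaces the combine step below. The shapes, the choice of tanh for g, and the random values are illustrative assumptions, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)
d, w = 50, 5                                   # embedding size and window size
W = rng.normal(scale=0.1, size=(d, 2 * d))     # shared composition matrix

def combine(h_left, h_right):
    """Eq. (3): parent = g(W [h_left; h_right]), with g = tanh."""
    return np.tanh(W @ np.concatenate([h_left, h_right]))

# Layer 1: the embeddings of the w context characters (random stand-ins here).
layers = [[rng.normal(scale=0.1, size=d) for _ in range(w)]]

# Each upper layer combines every pair of consecutive nodes of the layer below,
# so layer l has w - l + 1 nodes and the top layer has a single node.
for l in range(1, w):
    prev = layers[-1]
    layers.append([combine(prev[j], prev[j + 1]) for j in range(len(prev) - 1)])

# Every neuron in the DAG is one feature composition; concatenating all of them
# gives the vector that later feeds the linear tag-scoring layer (see below).
x_i = np.concatenate([h for layer in layers for h in layer])
print(len(layers), "layers,", sum(len(l) for l in layers), "nodes, dim =", x_i.shape[0])
# -> 5 layers, 15 nodes, dim = 750
```

With w = 5 this yields q = 15 nodes, so the concatenated representation has dimensionality q × d = 750, consistent with the definition of Q given below.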
1746 When the children nodes combine into their parent node, the combination information of two children nodes is also merged and preserved by their parent node. Although the mechanism above seem to work well, it can not sufficiently model the complicated combination features for its simplicity in practice. Inspired by the success of the gated recurrent neural network (Cho et al., 2014b; Chung et al., 2014), we propose a gated recursive neural network (GRNN) by introducing two kinds of gates, namely “reset gate” and “update gate”. Specifically, there are two reset gates, rL and rR, partially reading the information from left child and right child respectively. And the update gates zN, zL and zR decide what to preserve when combining the children’s information. Intuitively, these gates seems to decide how to update and exploit the combination information. In the case of word segmentation, for each character ci of a given sentence c1:n, we first represent each context character cj into its corresponding embedding cj, where i −w1 ≤j ≤i + w2 and the definitions of w1 and w2 are as same as mentioned above. Then, the embeddings are sent to the first layer of GRNN as inputs, whose outputs are recursively applied to upper layers until it outputs a single fixed-length vector. The outputs of the different neurons can be regarded as the different feature compositions. After concatenating the outputs of all neurons in the network, we get a new big vector xi. Next, we receive the tag score vector yi for character cj by a linear transformation of xi: yi = Ws × xi + bs, (4) where bs ∈R|T|, Ws ∈R|T|×Q. Q = q × d is dimensionality of the concatenated vector xi, where q is the number of nodes in the network. 3.3 Gated Recursive Unit GRNN consists of the minimal structures, gated recursive units, as showing in Figure 4. By assuming that the window size is w, we will have recursion layer l ∈[1, w]. At each recursion layer l, the activation of the j-th hidden node h(l) j ∈ Rd is computed as h(l) j = { zN ⊙ˆh l j + zL ⊙hl−1 j−1 + zR ⊙hl−1 j , l > 1, cj, l = 1, (5) Gate z Gate rL Gate rR hj-1 (l-1) hj (l-1) hj^(l) hj (l) Figure 4: Our proposed gated recursive unit. where zN, zL and zR ∈Rd are update gates for new activation ˆh l j, left child node hl−1 j−1 and right child node hl−1 j respectively, and ⊙indicates element-wise multiplication. The update gates can be formalized as: z =   zN zL zR  =   1/Z 1/Z 1/Z  ⊙exp(U   ˆh l j hl−1 j−1 hl−1 j  ), (6) where U ∈R3d×3d is the coefficient of update gates, and Z ∈Rd is the vector of the normalization coefficients, Zk = 3 ∑ i=1 [exp(U   ˆh l j hl−1 j−1 hl−1 j  )]d×(i−1)+k, (7) where 1 ≤k ≤d. Intuitively, three update gates are constrained by:            [zN]k + [zL]k + [zR]k = 1, 1 ≤k ≤d; [zN]k ≥0, 1 ≤k ≤d; [zL]k ≥0, 1 ≤k ≤d; [zR]k ≥0, 1 ≤k ≤d. (8) The new activation ˆh l j is computed as: ˆh l j = tanh(Wˆh [ rL ⊙hl−1 j−1 rR ⊙hl−1 j ] ), (9) where Wˆh ∈Rd×2d, rL ∈Rd, rR ∈Rd. rL and rR are the reset gates for left child node hl−1 j−1 and right child node hl−1 j respectively, which can be 1747 formalized as: [ rL rR ] = σ(G [ hl−1 j−1 hl−1 j ] ), (10) (11) where G ∈R2d×2d is the coefficient of two reset gates and σ indicates the sigmoid function. Intuiativly, the reset gates control how to select the output information of the left and right children, which results to the current new activation ˆh. 
By the update gates, the activation of a parent neuron can be regarded as a choice among the the current new activation ˆh, the left child, and the right child. This choice allows the overall structure to change adaptively with respect to the inputs. This gating mechanism is effective to model the combinations of the characters. 3.4 Inference In Chinese word segmentation task, it is usually to employ the Viterbi algorithm to inference the tag sequence t1:n for a given input sentence c1:n. In order to model the tag dependencies, the previous neural network models (Collobert et al., 2011; Zheng et al., 2013; Pei et al., 2014) introduce a transition matrix A, and each entry Aij is the score of the transformation from tag i ∈T to tag j ∈T. Thus, the sentence-level score can be formulated as follows: s(c1:n, t1:n, θ) = n ∑ i=1 ( Ati−1ti + fθ(ti|ci−w1:i+w2) ) , (12) where fθ(ti|ci−w1:i+w2) is the score for choosing tag ti for the i-th character by our proposed GRNN (Eq. (4)). The parameter set of our model is θ = (M, Ws, bs, Wˆh, U, G, A). 4 Training 4.1 Layer-wise Training Deep neural network with multiple hidden layers is very difficult to train for its problem of gradient diffusion and risk of overfitting. Following (Hinton and Salakhutdinov, 2006), we employ the layer-wise training strategy to avoid problems of overfitting and gradient vanishing. The main idea of layer-wise training is to train the network with adding the layers one by one. Specifically, we first train the neural network with the first hidden layer only. Then, we train at the network with two hidden layers after training at first layer is done and so on until we reach the top hidden layer. When getting convergency of the network with layers 1 to l , we preserve the current parameters as initial values of that in training the network with layers 1 to l + 1. 4.2 Max-Margin Criterion We use the Max-Margin criterion (Taskar et al., 2005) to train our model. Intuitively, the MaxMargin criterion provides an alternative to probabilistic, likelihood based estimation methods by concentrating directly on the robustness of the decision boundary of a model. We use Y (xi) to denote the set of all possible tag sequences for a given sentence xi and the correct tag sequence for xi is yi. The parameter set of our model is θ. We first define a structured margin loss ∆(yi, ˆy) for predicting a tag sequence ˆy for a given correct tag sequence yi: ∆(yi, ˆy) = n ∑ j η1{yi,j ̸= ˆyj}, (13) where n is the length of sentence xi and η is a discount parameter. The loss is proportional to the number of characters with an incorrect tag in the predicted tag sequence. For a given training instance (xi, yi), we search for the tag sequence with the highest score: y∗= arg max ˆy∈Y (x) s(xi, ˆy, θ), (14) where the tag sequence is found and scored by the proposed model via the function s(·) in Eq. (12). The object of Max-Margin training is that the tag sequence with highest score is the correct one: y∗= yi and its score will be larger up to a margin to other possible tag sequences ˆy ∈Y (xi): s(x, yi, θ) ≥s(x, ˆy, θ) + ∆(yi, ˆy). (15) This leads to the regularized objective function for m training examples: J(θ) = 1 m m ∑ i=1 li(θ) + λ 2 ∥θ∥2 2, (16) li(θ) = max ˆy∈Y (xi)(s(xi, ˆy, θ)+∆(yi, ˆy))−s(xi, yi, θ). (17) 1748 By minimizing this object, the score of the correct tag sequence yi is increased and score of the highest scoring incorrect tag sequence ˆy is decreased. The objective function is not differentiable due to the hinge loss. 
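Before describing the optimizer, a toy illustration of this objective may help. The sketch below computes the structured margin of Eq. (13) and the per-sentence loss of Eq. (17); the sentence-level scores are invented stand-ins for s(x, y, θ) of Eq. (12), and the value 0.2 for η is the discount used in the paper's experiments.

```python
eta = 0.2   # discount parameter of Eq. (13)

def margin(gold_tags, pred_tags):
    """Eq. (13): eta times the number of characters with an incorrect tag."""
    return eta * sum(g != p for g, p in zip(gold_tags, pred_tags))

def margin_loss(gold_tags, candidates, score):
    """Eq. (17): the best margin-augmented score over candidate tag sequences,
    minus the score of the gold sequence. Because the gold sequence is itself
    a candidate (with zero margin), the value is never negative."""
    return max(score(y) + margin(gold_tags, y) for y in candidates) - score(gold_tags)

# Toy example: a 4-character sentence with tags from {B, M, E, S} and
# invented sentence-level scores.
gold = ("B", "E", "S", "S")
scores = {("B", "E", "S", "S"): 4.0,
          ("B", "M", "E", "S"): 3.5,
          ("S", "S", "S", "S"): 2.0}
loss = margin_loss(gold, scores.keys(), lambda y: scores[y])
print(loss)  # 0.0: the gold scores 4.0, the runners-up reach only 3.9 and 2.4
```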
We use a generalization of gradient descent called subgradient method (Ratliff et al., 2007) which computes a gradient-like direction. Following (Socher et al., 2013a), we minimize the objective by the diagonal variant of AdaGrad (Duchi et al., 2011) with minibatchs. The parameter update for the i-th parameter θt,i at time step t is as follows: θt,i = θt−1,i − α √∑t τ=1 g2 τ,i gt,i, (18) where α is the initial learning rate and gτ ∈R|θi| is the subgradient at time step τ for parameter θi. 5 Experiments We evaluate our model on two different kinds of texts: newswire texts and micro-blog texts. For evaluation, we use the standard Bakeoff scoring program to calculate precision, recall, F1-score. 5.1 Word Segmentation on Newswire Texts 5.1.1 Datasets We use three popular datasets, PKU, MSRA and CTB6, to evaluate our model on newswire texts. The PKU and MSRA data are provided by the second International Chinese Word Segmentation Bakeoff (Emerson, 2005), and CTB6 is from Chinese TreeBank 6.0 (LDC2007T36) (Xue et al., 2005), which is a segmented, part-of-speech tagged, and fully bracketed corpus in the constituency formalism. These datasets are commonly used by previous state-of-the-art models and neural network models. In addition, we use the first 90% sentences of the training data as training set and the rest 10% sentences as development set for PKU and MSRA datasets, and we divide the training, development and test sets according to (Yang and Xue, 2012) for the CTB6 dataset. All datasets are preprocessed by replacing the Chinese idioms and the continuous English characters and digits with a unique flag. 5.1.2 Hyper-parameters We set the hyper-parameters of the model as list in Table 1 via experiments on development set. In addition, we set the batch size to 20. And we Window size k = 5 Character embedding size d = 50 Initial learning rate α = 0.3 Margin loss discount η = 0.2 Regularization λ = 10−4 Dropout rate on input layer p = 20% Table 1: Hyper-parameter settings. 0 10 20 30 40 88 90 92 94 96 epoches F-value(%) 1 layer 2 layers 3 layers 4 layers 5 layers layer-wise Figure 5: Performance of different models with or without layer-wise training strategy on PKU development set. find that it is a good balance between model performance and efficiency to set character embedding size d = 50. In fact, the larger embedding size leads to higher cost of computational resource, while lower dimensionality of the character embedding seems to underfit according to the experiment results. Deep neural networks contain multiple nonlinear hidden layers are always hard to train for it is easy to overfit. Several methods have been used in neural models to avoid overfitting, such as early stop and weight regularization. Dropout (Srivastava et al., 2014) is also one of the popular strategies to avoid overfitting when training the deep neural networks. Hence, we utilize the dropout strategy in this work. Specifically, dropout is to temporarily remove the neuron away with a fixed probability p independently, along with the incoming and outgoing connections of it. As a result, we find dropout on the input layer with probability p = 20% is a good tradeoff between model efficiency and performance. 
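As an illustration of the input-layer dropout just described, the minimal sketch below zeroes each dimension of the concatenated input embedding independently with probability p = 0.2 at training time. The 1/(1-p) rescaling is the common inverted-dropout convention and is an assumption; the paper does not specify the scaling variant or the exact dropout granularity.

```python
import numpy as np

rng = np.random.default_rng(0)
p = 0.2                        # dropout rate on the input layer

def dropout_input(x, p=p, train=True):
    """Zero each input dimension independently with probability p during
    training; leave the input unchanged at test time. The rescaling keeps
    the expected input the same with and without dropout."""
    if not train or p == 0.0:
        return x
    mask = rng.random(x.shape) >= p
    return x * mask / (1.0 - p)

# Toy input: the concatenation of five 50-dimensional character embeddings.
x = rng.normal(scale=0.1, size=5 * 50)
x_train = dropout_input(x, train=True)     # noisy input used during training
x_test = dropout_input(x, train=False)     # unchanged at test time
print(round(float(np.mean(x_train == 0.0)), 2))   # roughly 0.2 of entries are zeroed
```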
1749 models without layer-wise with layer-wise P R F P R F GRNN (1 layer) 90.7 89.6 90.2 GRNN (2 layers) 96.0 95.6 95.8 96.0 95.6 95.8 GRNN (3 layers) 95.9 95.4 95.7 96.0 95.7 95.9 GRNN (4 layers) 95.6 95.2 95.4 96.1 95.7 95.9 GRNN (5 layers) 95.3 94.7 95.0 96.1 95.7 95.9 Table 2: Performance of different models with or without layer-wise training strategy on PKU test set. 5.1.3 Layer-wise Training We first investigate the effects of the layer-wise training strategy. Since we set the size of context window to five, there are five recursive layers in our architecture. And we train the networks with the different numbers of recursion layers. Due to the limit of space, we just give the results on PKU dataset. Figure 5 gives the convergence speeds of the five models with different numbers of layers and the model with layer-wise training strategy on development set of PKU dataset. The model with one layer just use the neurons of the lowest layer in final linear score function. Since there are no non-linear layer, its seems to underfit and perform poorly. The model with two layers just use the neurons in the lowest two layers, and so on. The model with five layers use all the neurons in the network. As we can see, the layer-wise training strategy lead to the fastest convergence and the best performance. Table 2 shows the performances on PKU test set. The performance of the model with layer-wise training strategy is always better than that without layer-wise training strategy. With the increase of the number of layers, the performance also increases and reaches the stable high performance until getting to the top layer. 5.1.4 Results We first compare our model with the previous neural approaches on PKU, MSRA and CTB6 datasets as showing in Table 3. The character embeddings of the models are random initialized. The performance of word segmentation is significantly boosted by exploiting the gated recursive architecture, which can better model the combinations of the context characters than the previous neural models. Previous works have proven it will greatly improve the performance to exploit the pre-trained character embeddings instead of that with random initialization. Thus, we pre-train the embeddings on a huge unlabeled data, the Chinese Wikipedia corpus, with word2vec toolkit (Mikolov et al., 2013). By using these obtained character embeddings, our model receives better performance and still outperforms the previous neural models with pre-trained character embeddings. The detailed results are shown in Table 4 (1st to 3rd rows). Inspired by (Pei et al., 2014), we utilize the bigram feature embeddings in our model as well. The concept of feature embedding is quite similar to that of character embedding mentioned above. Specifically, each context feature is represented as a single vector called feature embedding. In this paper, we only use the simply bigram feature embeddings initialized by the average of two embeddings of consecutive characters element-wisely. Although the model of Pei et al. (2014) greatly benefits from the bigram feature embeddings, our model just obtains a small improvement with them. This difference indicates that our model has well modeled the combinations of the characters and do not need much help of the feature engineering. The detailed results are shown in Table 4 (4-th and 6-th rows). Table 5 shows the comparisons of our model with the state-of-the-art systems on F-value. 
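Since the bigram feature embeddings are described only briefly above, the following sketch shows their initialization: the embedding of a bigram is the element-wise average of the embeddings of its two characters. The character table here is a random stand-in and the sentence is the example from Figure 1; this is an illustration, not the released code.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 50
# Character embedding table (random stand-ins for pre-trained embeddings).
char_emb = {c: rng.normal(scale=0.1, size=d) for c in "雨天地面积水"}

def init_bigram_embedding(c1, c2, table=char_emb):
    """Initialize the embedding of the bigram c1c2 as the element-wise
    average of the two character embeddings."""
    return 0.5 * (table[c1] + table[c2])

sent = "雨天地面积水"
bigram_emb = {c1 + c2: init_bigram_embedding(c1, c2)
              for c1, c2 in zip(sent, sent[1:])}
print(list(bigram_emb))   # the five consecutive bigrams: 雨天, 天地, 地面, 面积, 积水
```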
The model proposed by Zhang and Clark (2007) is a word-based segmentation method, which exploit features of complete words, while remains of the list are all character-based word segmenters, whose features are mostly extracted from the context characters. Moreover, some systems (such as Sun and Xu (2011) and Zhang et al. (2013)) also exploit kinds of extra information such as the unlabeled data or other knowledge. Although our model only uses simple bigram features, it outperforms the previous state-of-the-art methods which use more complex features. 1750 models PKU MSRA CTB6 P R F P R F P R F (Zheng et al., 2013) 92.8 92.0 92.4 92.9 93.6 93.3 94.0* 93.1* 93.6* (Pei et al., 2014) 93.7 93.4 93.5 94.6 94.2 94.4 94.4* 93.4* 93.9* GRNN 96.0 95.7 95.9 96.3 96.1 96.2 95.4 95.2 95.3 Table 3: Performances on PKU, MSRA and CTB6 test sets with random initialized character embeddings. models PKU MSRA CTB6 P R F P R F P R F +Pre-train (Zheng et al., 2013) 93.5 92.2 92.8 94.2 93.7 93.9 93.9* 93.4* 93.7* (Pei et al., 2014) 94.4 93.6 94.0 95.2 94.6 94.9 94.2* 93.7* 94.0* GRNN 96.3 95.9 96.1 96.2 96.3 96.2 95.8 95.4 95.6 +bigram GRNN 96.6 96.2 96.4 97.5 97.3 97.4 95.9 95.7 95.8 +Pre-train+bigram (Pei et al., 2014) 95.2 97.2 GRNN 96.5 96.3 96.4 97.4 97.8 97.6 95.8 95.7 95.8 Table 4: Performances on PKU, MSRA and CTB6 test sets with pre-trained and bigram character embeddings. models PKU MSRA CTB6 (Tseng et al., 2005) 95.0 96.4 (Zhang and Clark, 2007) 95.1 97.2 (Sun and Xu, 2011) 95.7 (Zhang et al., 2013) 96.1 97.4 This work 96.4 97.6 95.8 Table 5: Comparison of GRNN with the state-ofthe-art methods on PKU, MSRA and CTB6 test sets. 5.2 Word Segmentation on Micro-blog Texts 5.2.1 Dataset We use the NLPCC 2015 dataset1 (Qiu et al., 2015) to evaluate our model on micro-blog texts. The NLPCC 2015 data are provided by the shared task in the 4th CCF Conference on Natural Language Processing & Chinese Computing (NLPCC 2015): Chinese Word Segmentation and POS Tagging for micro-blog Text. Different with the popular used newswire dataset, the NLPCC 2015 dataset is collected from Sina Weibo2, which consists of the relatively informal texts from micro-blog with the various topics, such as finance, sports, entertainment, and so on. The information of the dataset is 1http://nlp.fudan.edu.cn/nlpcc2015/ 2http://www.weibo.com/ shown in Table 6. To train our model, we also use the first 90% sentences of the training data as training set and the rest 10% sentences as development set. Here, we use the default setting of CRF++ toolkit with the feature templates as shown in Table 7. The same feature templates are also used for FNLP. 5.2.2 Results Since the NLPCC 2015 dataset is a new released dataset, we compare our model with the two popular open source toolkits for sequence labeling task: FNLP3 (Qiu et al., 2013) and CRF++4. Our model uses pre-trained and bigram character embeddings. Table 8 shows the comparisons of our model with the other systems on NLPCC 2015 dataset. 6 Related Work Chinese word segmentation has been studied with considerable efforts in the NLP community. The most popular word segmentation method is based on sequence labeling (Xue, 2003). Recently, researchers have tended to explore neural network 3https://github.com/xpqiu/fnlp/ 4http://taku910.github.io/crfpp/ *The result is from our own implementation of the corresponding method. 
1751 Dataset Sents Words Chars Word Types Char Types OOV Rate Training 10,000 215,027 347,984 28,208 39,71 Test 5,000 106,327 171,652 18,696 3,538 7.25% Total 15,000 322,410 520,555 35,277 4,243 Table 6: Statistical information of NLPCC 2015 dataset. unigram feature c−2, c−1, c0, c+1, c+2 bigram feature c−1 ◦c0, c0 ◦c+1 trigram feature c−2◦c−1◦c0, c−1◦c0◦c+1, c0 ◦c+1 ◦c+2 Table 7: Templates of CRF++ and FNLP. models P R F CRF++ 93.3 93.2 93.3 FNLP 94.1 93.9 94.0 This work 94.7 94.8 94.8 Table 8: Performances on NLPCC 2015 dataset. based approaches (Collobert et al., 2011) to reduce efforts of the feature engineering (Zheng et al., 2013; Qi et al., 2014). However, the features of all these methods are the concatenation of the embeddings of the context characters. Pei et al. (2014) also used neural tensor model (Socher et al., 2013b) to capture the complicated interactions between tags and context characters. But the interactions depend on the number of the tensor slices, which cannot be too large due to the model complexity. The experiments also show that the model of (Pei et al., 2014) greatly benefits from the further bigram feature embeddings, which shows that their model cannot even handle the interactions of the consecutive characters. Different with them, our model just has a small improvement with the bigram feature embeddings, which indicates that our approach has well modeled the complicated combinations of the context characters, and does not need much help of further feature engineering. More recently, Cho et al. (2014a) also proposed a gated recursive convolutional neural network in machine translation task to solve the problem of varying lengths of sentences. However, their approach only models the update gate, which can not tell whether the information is from the current state or from sub notes in update stage without reset gate. Instead, our approach models two kinds of gates, reset gate and update gate, by incorporating which we can better model the combinations of context characters via selection function of reset gate and collection function of update gate. 7 Conclusion In this paper, we propose a gated recursive neural network (GRNN) to explicitly model the combinations of the characters for Chinese word segmentation task. Each neuron in GRNN can be regarded as a different combination of the input characters. Thus, the whole GRNN has an ability to simulate the design of the sophisticated features in traditional methods. Experiments show that our proposed model outperforms the state-of-the-art methods on three popular benchmark datasets. Despite Chinese word segmentation being a specific case, our model can be easily generalized and applied to other sequence labeling tasks. In future work, we would like to investigate our proposed GRNN on other sequence labeling tasks. Acknowledgments We would like to thank the anonymous reviewers for their valuable comments. This work was partially funded by the National Natural Science Foundation of China (61472088, 61473092), the National High Technology Research and Development Program of China (2015AA015408), Shanghai Science and Technology Development Funds (14ZR1403200), Shanghai Leading Academic Discipline Project (B114). References Kyunghyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014a. On the properties of neural machine translation: Encoder–decoder approaches. In Proceedings of Workshop on Syntax, Semantics and Structure in Statistical Translation. 
Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Proceedings of EMNLP. 1752 Junyoung Chung, Caglar Gulcehre, KyungHyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. arXiv preprint arXiv:1412.3555. Ronan Collobert, Jason Weston, Léon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. T. Emerson. 2005. The second international Chinese word segmentation bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 123–133. Jeju Island, Korea. Geoffrey E Hinton and Ruslan R Salakhutdinov. 2006. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Wenzhe Pei, Tao Ge, and Chang Baobao. 2014. Maxmargin tensor neural network for chinese word segmentation. In Proceedings of ACL. Jordan B Pollack. 1990. Recursive distributed representations. Artificial Intelligence, 46(1):77–105. Yanjun Qi, Sujatha G Das, Ronan Collobert, and Jason Weston. 2014. Deep learning for character-based information extraction. In Advances in Information Retrieval, pages 668–674. Springer. Xipeng Qiu, Qi Zhang, and Xuanjing Huang. 2013. FudanNLP: A toolkit for Chinese natural language processing. In Proceedings of Annual Meeting of the Association for Computational Linguistics. Xipeng Qiu, Peng Qian, Liusong Yin, and Xuanjing Huang. 2015. Overview of the NLPCC 2015 shared task: Chinese word segmentation and POS tagging for micro-blog texts. arXiv preprint arXiv:1505.07599. Nathan D Ratliff, J Andrew Bagnell, and Martin A Zinkevich. 2007. (online) subgradient methods for structured prediction. In Eleventh International Conference on Artificial Intelligence and Statistics (AIStats). Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013a. Parsing with compositional vector grammars. In In Proceedings of the ACL conference. Citeseer. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013b. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929–1958. Weiwei Sun and Jia Xu. 2011. Enhancing Chinese word segmentation using unlabeled data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 970–979. Association for Computational Linguistics. Ben Taskar, Vassil Chatalbashev, Daphne Koller, and Carlos Guestrin. 2005. Learning structured prediction models: A large margin approach. In Proceedings of the international conference on Machine learning. Huihsin Tseng, Pichuan Chang, Galen Andrew, Daniel Jurafsky, and Christopher Manning. 2005. A conditional random field word segmenter for sighan bakeoff 2005. 
In Proceedings of the fourth SIGHAN workshop on Chinese language Processing, volume 171. Naiwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese TreeBank: Phrase structure annotation of a large corpus. Natural language engineering, 11(2):207–238. N. Xue. 2003. Chinese word segmentation as character tagging. Computational Linguistics and Chinese Language Processing, 8(1):29–48. Yaqin Yang and Nianwen Xue. 2012. Chinese comma disambiguation for discourse analysis. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long PapersVolume 1, pages 786–794. Association for Computational Linguistics. Yue Zhang and Stephen Clark. 2007. Chinese segmentation with a word-based perceptron algorithm. In ACL. Longkai Zhang, Houfeng Wang, Xu Sun, and Mairgup Mansur. 2013. Exploring representations from unlabeled data with co-training for Chinese word segmentation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Xiaoqing Zheng, Hanyang Chen, and Tianyu Xu. 2013. Deep learning for chinese word segmentation and pos tagging. In EMNLP, pages 647–657. 1753
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1754–1764, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics An analysis of the user occupational class through Twitter content Daniel Preot¸iuc-Pietro1, Vasileios Lampos2 and Nikolaos Aletras2 1 Computer & Information Science, University of Pennsylvania 2 Department of Computer Science, University College London [email protected], {v.lampos,n.aletras}@ucl.ac.uk Abstract Social media content can be used as a complementary source to the traditional methods for extracting and studying collective social attributes. This study focuses on the prediction of the occupational class for a public user profile. Our analysis is conducted on a new annotated corpus of Twitter users, their respective job titles, posted textual content and platform-related attributes. We frame our task as classification using latent feature representations such as word clusters and embeddings. The employed linear and, especially, non-linear methods can predict a user’s occupational class with strong accuracy for the coarsest level of a standard occupation taxonomy which includes nine classes. Combined with a qualitative assessment, the derived results confirm the feasibility of our approach in inferring a new user attribute that can be embedded in a multitude of downstream applications. 1 Introduction The growth of online social networks provides the opportunity to analyse user text in a broader context (Tumasjan et al., 2010; Bollen et al., 2011; Lampos and Cristianini, 2012). This includes the social network (Sadilek et al., 2012), spatio-temporal information (Lampos and Cristianini, 2010) and personal attributes (Al Zamal et al., 2012). Previous research has analysed language differences in user attributes like location (Cheng et al., 2010), gender (Burger et al., 2011), impact (Lampos et al., 2014) and age (Rao et al., 2010), showing that language use is influenced by them. Therefore, user text allows us to infer these properties. This user profiling is important not only for sociolinguistic studies, but also for other applications: recommender systems to provide targeted advertising, analysts who study different opinions in each social class or integration in text regression tasks such as voting intention (Lampos et al., 2013). Social status reflected through a person’s occupation is a factor which influences language use (Bernstein, 1960; Bernstein, 2003; Labov, 2006). Therefore, our hypothesis is that language use in social media can be indicative of a user’s occupational class. For example, executives may write more frequently about business or financial news, while people in manufacturing positions could refer more to their personal interests and less to job related activities. Similarly, we expect some categories of people, like those working in sales and customer services, to be more social or to use more informal language. Focusing on the microblogging platform of Twitter, we explore our hypothesis by studying the task of predicting a user’s occupational class given platform-related attributes and generated content, i.e. tweets. That has direct applicability in a broad range of areas from sociological studies, which analyse the behaviour of different occupations, to recruiting companies that target people for new job opportunities. 
For this study, we created a publicly available data set of users, including their profile information and historical text content as well as a label to an occupational class from the “Standard Occupational Classification” taxonomy (see Section 2). We frame our task as classification, aiming to identify the most likely job class for a given user based on profile and a variety of textual features: general word embeddings and clusters (or ‘topics’). Both linear and non-linear classification methods are applied with a focus on those that can assist interpretation and offer qualitative insights. We find that text features, especially word clusters, lead to good predictive performance. Accuracy for our best model is well above 50% for 9-way classifi1754 cation, outperforming competitive methods. The best results are obtained using the Bayesian nonparametric framework of Gaussian Processes (Rasmussen and Williams, 2006), which also accommodates feature interpretation via the Automatic Relevance Determination. This allows us to get insight into differences in language use across job classes and, finally, assess our original hypothesis about the thematic divergence across them. 2 Standard Occupational Classification To enable the user occupation study, we adopt a standardised job classification taxonomy for mapping Twitter users to occupations. The Standard Occupational Classification (SOC)1 is a UK government system developed by the Office of National Statistics for classifying occupations. Jobs are categorised hierarchically based on skill requirements and content. The SOC scheme includes nine major groups coded with a digit from 1 to 9. Each major group is divided into sub-major groups coded with 2 digits, where the first digit indicates the major group. Each sub-major group is further divided into minor groups coded with 3 digits and finally, minor groups are divided into unit groups, coded with 4 digits. The unit groups are the leaves of the hierarchy and represent specific jobs related to the group. Table 1 shows a part of the SOC hierarchy. In total, there are 9 major groups, 25 sub-major groups, 90 minor groups and 369 unit groups. Although other hierarchies exist, we use the SOC because it has been published recently (in 2010), includes newly introduced jobs, has a balanced hierarchy and offers a wide variety of job titles that were crucial in our data set creation. 3 Data To the best of our knowledge there are no publicly available data sets suitable for the task we aim to investigate. Thus, we have created a new one consisting of Twitter users mapped to their occupation, together with their profile information and historical tweets. We use the account’s profile information to capture users with self-disclosed occupations. The potential self-selection bias is acknowledged, but filtering content via self disclosure 1http://www.ons.gov.uk/ons/ guide-method/classifications/ current-standard-classifications/ soc2010/index.html; accessed on 24/02/2015. 
Major Group 1 (C1): Managers, Directors and Senior Officials Sub-major Group 11: Corporate Managers and Directors Minor Group 111: Chief Executives and Senior Officials Unit Group 1115: Chief Executives and Senior Officials •Job: chief executive, bank manager Unit Group 1116: Elected Officers and Representatives Minor Group 112: Production Managers and Directors Minor Group 113: Functional Managers and Directors Minor Group 115: Financial Institution Managers and Directors Minor Group 116: Managers and Directors in Transport and Logistics Minor Group 117: Senior Officers in Protective Services Minor Group 118: Health and Social Services Managers and Directors Minor Group 119: Managers and Directors in Retail and Wholesale Sub-major Group 12: Other Managers and Proprietors Major Group (C2): Professional Occupations •Job: mechanical engineer, pediatrist Major Group (C3): Associate Professional and Technical Occupations •Job: system administrator, dispensing optician Major Group (C4): Administrative and Secretarial Occupations •Job: legal clerk, company secretary Major Group (C5): Skilled Trades Occupations •Job: electrical fitter, tailor Major Group (C6): Caring, Leisure and Other Service Occupations •Job: nursery assistant, hairdresser Major Group (C7): Sales and Customer Service Occupations •Job: sales assistant, telephonist Major Group (C8): Process, Plant and Machine Operatives •Job: factory worker, van driver Major Group (C9): Elementary Occupations •Job: shelf stacker, bartender Table 1: Subset of the SOC classification hierarchy. is widespread when extracting large-scale data for user attribute inference (Pennacchiotti and Popescu, 2011; Coppersmith et al., 2014). Similarly to Hecht et al. (2011), we first assess the proportion of Twitter accounts with a clear mention to their occupation by annotating the user description field of a random set of 500 users. There were chosen from the random 1% sample, having at least 200 tweets in their history and with a majority of English tweets. There, we can identify the following categories: no description (12.2%), random information (22%), user information but not occupation related (45.8%), and job related information (20%). To create our data set, we thus use the user description field to search for self-disclosed job titles provided by the 4-digit SOC unit groups, since they contain specific job titles. We queried Twitter’s Search API to retrieve for each job title a maximum of 200 accounts which best matched occupation keywords. Then, we aggregated the accounts into the 3-digit (minor) categories. To remove potential ambiguity in the retrieved set, we manually inspected accounts in each minor category and filtered out those that belong to companies, contain no description or the description provided does not indicate that the user has a job corresponding to the minor category. In total, around 50% of the accounts were removed by manual inspection per1755 formed by the authors. We also removed users in multiple categories and or users that have tweeted less than 50 times in their history. Finally, we eliminated all 3-digit categories that contained less than 45 user accounts after this filtering. This process produced a total number of 5,191 users from 55 minor groups (22 sub-major groups), spread across all nine major SOC groups. The distribution of users across these nine groups is: 9.7%, 34.5%, 20.6%, 3.8%, 16.7%, 6.1%, 1.4%, 4.2%, and 3% (following the ordering of Table 1). 
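Since the SOC codes encode the hierarchy positionally, the aggregation from 4-digit unit groups to 3-digit minor and 1-digit major groups used above can be sketched as a simple prefix lookup. The example code 1115 is taken from Table 1; the function name is ours, not part of the authors' pipeline.

# A sketch (the function name is ours): rolling a 4-digit SOC unit-group
# code up to its minor, sub-major and major groups by digit prefix.
def soc_levels(unit_code):
    code = str(unit_code)
    assert len(code) == 4, "SOC unit groups are 4-digit codes"
    return {
        "unit": code,           # e.g. 1115: Chief Executives and Senior Officials
        "minor": code[:3],      # e.g. 111
        "sub_major": code[:2],  # e.g. 11
        "major": code[:1],      # e.g. 1, i.e. class C1
    }

print(soc_levels(1115))
# {'unit': '1115', 'minor': '111', 'sub_major': '11', 'major': '1'}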
In our data set the most well represented minor occupational groups are ‘Functional Managers and Directors’ (184 users – code 113), ‘Therapy Professionals’ (159 users – code 222) and ‘Quality and Regulatory Professionals’ (158 users – code 246), whereas the least represented ones are ‘Textile and Garment Trades’ (45 users – code 541), ‘Elementary Security Occupations’ (46 users – code 924), ‘Elementary Cleaning Occupations’ (47 users – code 923). The mean number of users in the minor classes is equal to 94.4 with a standard deviation of 35.6. For these users, we have collected all their tweets, going as far back as the latest 3,200, and their profile information. The final data set consists of 10,796,836 tweets collected around 5 August 2014 and is openly available.2 A separate Twitter data set is used as a reference corpus in order to build the feature representations detailed in Section 4. This data set is an extract from the Twitter Gardenhose stream (a 10% representative sample of the entire Twitter stream) from 2 January to 28 February 2011. Based on this content, we also build the vocabulary for the text features, containing the most frequent 71,555 words. We tokenise and filter for English using the Trendminer preprocessing pipeline (Preot¸iuc-Pietro et al., 2012). 4 Features In this section, we overview the features used in the occupational class prediction task. They are divided into two types: (1) user level features, (2) textual features. 4.1 User Level Features (UserLevel) The user level features are based on the general user information or aggregated statistics about the tweets. Table 2 introduces the 18 features in this 2http://www.sas.upenn.edu/˜danielpr/ jobs.tar.gz u1 number of followers u2 number of friends u3 number of times listed u4 follower/friend ratio u5 proportion of non-duplicate tweets u6 proportion of retweeted tweets u7 average no. of retweets/tweet u8 proportion of retweets done u9 proportion of hashtags u10 proportion of tweets with hashtags u11 proportion of tweets with @-mentions u12 proportion of @-replies u13 no. of unique @-mentions in tweets u14 proportion of tweets with links u15 no. of favourites the account made u16 avg. number of tweets/day u17 total number of tweets u18 proportion of tweets in English Table 2: User level attributes for a Twitter user. category. 4.2 Textual Features The textual features are derived from the aggregated set of user’s tweets. We use our reference corpus to represent each user as a distribution over these features. We ignore the bio field from building textual features to avoid introducing biases from our data collection method. While this is a restriction, our analysis showed that in less than 20% of the cases the information in the bio is directly relevant to the occupation. 4.2.1 SVD Word Embeddings (SVD-E) We use a more abstract representation of words than simple unigram counts in order to aid interpretability of our analysis. We compute a word to word similarity matrix from our reference corpus. Normalised Pointwise Mutual Information (NPMI) (Bouma, 2009) is used to compute word to word similarity. NPMI is an information theoretic measure indicating which words co-occur in the same context, where the context is represented by a whole tweet: NPMI(x, y) = −log P(x, y) · log P(x, y) P(x) · P(y). (1) We then perform singular value decomposition (SVD) on the word to word similarity matrix and obtain an embedding of words into a low dimensional space. 
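As a minimal sketch of the SVD-E construction just described (not the authors' implementation), one can compute NPMI from tweet-level co-occurrence counts, take a truncated SVD of the resulting similarity matrix and sum the embedding dimensions over a user's words. NPMI here follows Bouma (2009), i.e. PMI normalised by -log P(x, y); all array and function names below are our assumptions.

import numpy as np

# A sketch, not the authors' implementation: SVD word embeddings (SVD-E)
# from an NPMI word-to-word similarity matrix. `cooc` is a (V x V) array of
# tweet-level co-occurrence counts, `counts` a length-V array of word counts
# and `n_tweets` the number of tweets in the reference corpus.
def npmi_matrix(cooc, counts, n_tweets):
    p_xy = cooc / n_tweets
    p_x = counts / n_tweets
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(p_xy / np.outer(p_x, p_x))
        npmi = pmi / -np.log(p_xy)        # Bouma (2009): PMI normalised by -log P(x, y)
    npmi[~np.isfinite(npmi)] = 0.0        # pairs never observed together
    return npmi

def svd_embeddings(npmi, dim=50):
    # truncated SVD of the similarity matrix -> low-dimensional word vectors
    u, s, _ = np.linalg.svd(npmi)
    return u[:, :dim] * s[:dim]

def user_features(word_ids, emb):
    # a user is represented by summing each embedding dimension over their words
    return emb[word_ids].sum(axis=0)

The dim argument here corresponds to the embedding sizes evaluated next.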
In our experiments we tried the following dimensionalities: 30, 50, 100 and 200. The feature representation for each user is obtained summing over each of the embedding dimensions across all words. 1756 4.2.2 NPMI Clusters (SVD-C) We use the NPMI matrix described in the previous paragraph to create hard clusters of words. These clusters can be thought as ‘topics’, i.e. words that are semantically similar. From a variety of clustering techniques we choose spectral clustering (Shi and Malik, 2000; Ng et al., 2002), a hard-clustering approach which deals well with high-dimensional and non-convex data (von Luxburg, 2007). Spectral clustering is based on applying SVD to the graph Laplacian and aims to perform an optimal graph partitioning on the NPMI similarity matrix. The number of clusters needs to be pre-specified. We use 30, 50, 100 and 200 clusters – numbers were chosen a priori based on previous work (Lampos et al., 2014). The feature representation is the standardised number of words from each cluster. Although there is a loss of information compared to the original representation, the clusters are very useful in the model analysis step. Embeddings are hard to interpret because each dimension is an abstract notion, while the clusters can be interpreted by presenting a list of the most frequent or representative words. The latter are identified using the following centrality metric: Cw = P x∈c NPMI(w, x) |c| −1 , (2) where c denotes the cluster and w the target word. 4.2.3 Neural Embeddings (W2V-E) Recently, there has been a growing interest in neural language models, where the words are projected into a lower dimensional dense vector space via a hidden layer (Mikolov et al., 2013b). These models showed they can provide a better representation of words compared to traditional language models (Mikolov et al., 2013c) because they capture syntactic information rather than just bag-of-context, handling non-linear transformations. In this low dimensional vector space, words with a small distance are considered semantically similar. We use the skipgram model with negative sampling (Mikolov et al., 2013a) to learn word embeddings on the Twitter reference corpus. In that case, the skip-gram model is factorising a word-context PMI matrix (Levy and Goldberg, 2014). We use a layer size of 50 and the Gensim implementation.3 3http://radimrehurek.com/gensim/ models/word2vec.html 4.2.4 Neural Clusters (W2V-C) Similar to the NPMI cluster, we use the neural embeddings in order to obtain clusters of related words, i.e. ‘topics’. We derive a word to word similarity matrix using cosine similarity on the neural embeddings. We apply spectral clustering on this matrix to obtain 30, 50, 100 and 200 word clusters. 5 Classification with Gaussian Processes In this section, we briefly overview Gaussian Process (GP) for classification, highlighting our motivation for using this method. GPs formulate a Bayesian non-parametric machine learning framework which defines a prior on functions (Rasmussen and Williams, 2006). The properties of the functions are given by a kernel which models the covariance in the response values as a function of its inputs. Although GPs form a powerful learning tool, they have only recently been used in NLP research (Cohn and Specia, 2013; Preot¸iuc-Pietro and Cohn, 2013) with classification applications limited to (Polajnar et al., 2011). 
Formally, GP methods aim to learn a function $f : \mathbb{R}^d \rightarrow \mathbb{R}$ drawn from a GP prior given the inputs $\mathbf{x} \in \mathbb{R}^d$:

$f(\mathbf{x}) \sim \mathcal{GP}(m(\mathbf{x}), k(\mathbf{x}, \mathbf{x}'))$ , (3)

where $m(\cdot)$ is the mean function (here 0) and $k(\cdot, \cdot)$ is the covariance kernel. Usually, the Squared Exponential (SE) kernel (a.k.a. RBF or Gaussian) is used to encourage smooth functions. For the multidimensional pair of inputs $(\mathbf{x}, \mathbf{x}')$, this is:

$k_{\mathrm{ard}}(\mathbf{x}, \mathbf{x}') = \sigma^2 \exp\!\left[ \sum_{i}^{d} \frac{-(x_i - x'_i)^2}{2 l_i^2} \right]$ , (4)

where $l_i$ are lengthscale parameters learnt only using training data by performing gradient ascent on the type-II marginal likelihood. Intuitively, the lengthscale parameter $l_i$ controls the variation along the $i$-th input dimension, i.e. a low value makes the output very sensitive to input data, thus making that input more useful for the prediction. If the lengthscales are learnt separately for each input dimension, the kernel is named SE with Automatic Relevance Determination (ARD) (Neal, 1996). Binary classification using GPs 'squashes' the real-valued latent function $f(\mathbf{x})$ output through a logistic function: $\pi(\mathbf{x}) \triangleq P(y = 1|\mathbf{x}) = \sigma(f(\mathbf{x}))$, in a similar way to logistic regression classification. The object of the GP inference is the distribution of the latent variable corresponding to a test case $x_*$:

$P(f_*|\mathbf{x}, \mathbf{y}, x_*) = \int P(f_*|\mathbf{x}, x_*, f)\, P(f|\mathbf{x}, \mathbf{y})\, df$ , (5)

where $P(f|\mathbf{x}, \mathbf{y}) = P(\mathbf{y}|f)\, P(f|\mathbf{x}) / P(\mathbf{y}|\mathbf{x})$ is the posterior over the latent variables. If the likelihood $P(\mathbf{y}|f)$ is Gaussian, the combination with a GP prior $P(f|\mathbf{x})$ gives a posterior GP over functions. In binary classification, the distribution over the latent $f_*$ is combined with the logistic function to produce the prediction:

$\bar{\pi}_* = \int \sigma(f_*)\, P(f_*|\mathbf{x}, \mathbf{y}, x_*)\, df_*$ . (6)

This results in a non-Gaussian likelihood in the posterior formulation and therefore exact inference is infeasible for classification models. Multiple approximations exist that make the computation tractable (Gibbs and Mackay, 1997; Williams and Barber, 1998; Neal, 1999). In our experiments we opt to use the Expectation Propagation (EP) method (Minka, 2001), which approximates the non-Gaussian joint posterior with a Gaussian one. EP offers very good empirical results for many different likelihoods, although it has no proof of convergence. The complexity of the inference step is $O(n^3)$. Given that our data set is very large and the number of features is high, we conduct inference using the fully independent training conditional (FITC) approximation (Snelson and Ghahramani, 2006) with 500 random inducing points. We refer the interested reader to Rasmussen and Williams (2006) for further information on GP classification. Although we could use multi-class classification methods, in order to provide insight, we perform a separate one-vs-all classification for each class and then determine a label through the occupational class that has the highest likelihood. 6 Experiments This section presents the experimental results for our task. We first compare the accuracy of our classification methods on held-out data using each feature set and conduct a standard error analysis. We then use the interpretability of the ARD lengthscales from the GP classifier to further analyse the relevant features.
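To make the ARD mechanism concrete, the following minimal numpy sketch implements the SE-ARD kernel of Equation (4) and ranks input dimensions by their learnt lengthscales (smaller lengthscale = more relevant), which is how topic relevance is read off in Section 6.3. It does not reproduce the EP/FITC inference used in the paper, and all names are ours.

import numpy as np

# Sketch of the Squared Exponential kernel with Automatic Relevance
# Determination (Equation 4): one lengthscale l_i per input dimension.
def se_ard_kernel(X1, X2, sigma2, lengthscales):
    diff = X1[:, None, :] - X2[None, :, :]             # (n1, n2, d)
    scaled_sq = (diff / lengthscales) ** 2              # divide each dimension by l_i
    return sigma2 * np.exp(-0.5 * scaled_sq.sum(axis=-1))

# After the hyperparameters have been learnt (in the paper via type-II marginal
# likelihood with EP/FITC, not reproduced here), a small lengthscale means the
# corresponding feature matters more, so relevance can be ranked by sorting the
# learnt lengthscales in increasing order.
def rank_features_by_relevance(lengthscales, names):
    order = np.argsort(lengthscales)                    # smallest lengthscale first
    return [(names[i], float(lengthscales[i])) for i in order]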
6.1 Predictive Accuracy We assign users to one of nine possible classes (see the ‘Major Groups’ on Table 1) using one set of Feature LR SVM GP Most frequent class 34.4% 34.4% 34.4% UserLevel 34.0% 31.5% 34.2% SVD-E-30 36.3% 35.0% 39.8% SVD-E-50 36.7% 36.9% 38.6% SVD-E-100 40.8% 41.9% 40.9% SVD-E-200 40.0% 43.1% 43.8% SVD-C-30 36.9% 36.5% 38.2% SVD-C-50 37.7% 38.3% 40.5% SVD-C-100 40.4% 42.1% 44.6% SVD-C-200 44.2% 47.9% 48.2% W2V-E-50 42.5% 49.0% 48.4% W2V-C-30 40.0% 46.0% 47.1% W2V-C-50 42.3% 48.5% 47.9% W2V-C-100 44.4% 48.7% 51.3% W2V-C-200 46.9% 51.7% 52.7% Table 3: 9-way classification accuracy on held-out data for our 3 methods. Textual features are obtained using SVD or Word2Vec (W2V). E represents embeddings, C clusters. The final number denotes the amount of clusters or the size of the embedding. features at a time. Experiments combining features yielded only minor improvements. We apply common linear and non-linear methods together with our proposed GP classifier. The linear method is logistic regression (LR) with Elastic Net regularisation (Freedman, 2009) and the non-linear one is formulated by a Support Vector Machine (SVM) with an RBF kernel (Vapnik, 1998). The accuracy of our classifiers is measured on held-out data. Our data set is divided into stratified training (80%), validation (10%) and testing (10%) sets. The validation set was used to learn the LR and SVM hyperparameters, while the GP did not use this set at all. We report results using all three methods and all feature sets in Table 3. We first observe that user level features (UserLevel; see Section 4.1) are not useful for predicting the job class. This finding indicates that general social behaviour or user impact are likely to be spread evenly across classes. It also highlights the difficulty of the task and motivates the use of deeper textual features. The textual features (see Section 4.2) improve performance as compared to the most frequent class baseline. We also notice that the embeddings (SVDE and W2V-E) have lower performance than the clusters (SVD-C and W2V-C) in most of the cases. This is expected, as adding word vectors to represent a user’s text may overemphasise common words. The size of the embedding also increases performance. 
The W2V features show better ac1758 Rank Topic # Label Topic (most central words; most frequent words) MRR µ(l) 1 116 Arts archival, stencil, canvas, minimalist, illustration, paintings, abstract, designs, lettering, steampunk; art, design, print, collection, poster, painting, custom, logo, printing, drawing .43 1.35 2 105 Health chemotherapy, diagnosis, disease, inflammation, diseases, arthritis, symptoms, patients, mrsa, colitis; risk, cancer, mental, stress, patients, treatment, surgery, disease, drugs, doctor .20 2.76 3 153 Beauty Care exfoliating, cleanser, hydrating, moisturizer, moisturiser, shampoo, lotions, serum, moisture, clarins; beauty, natural, dry, skin, massage, plastic, spray, facial, treatments, soap .19 3.69 4 21 Higher Education undergraduate, doctoral, academic, students, curriculum, postgraduate, enrolled, master’s, admissions, literacy; students, research, board, student, college, education, library, schools, teaching, teachers .18 3.21 5 158 Software Engineering integrated, data, implementation, integration, enterprise, configuration, open-source, cisco, proprietary, avaya; service, data, system, services, access, security, development, software, testing, standard .17 3.10 7 186 Football bardsley, etherington, gallas, heitinga, assou-ekotto, lescott, pienaar, warnock, ridgewell, jenas; van, foster, cole, winger, terry, reckons, youngster, rooney, fielding, kenny .16 3.11 8 124 Corporate consortium, institutional, firm’s, acquisition, enterprises, subsidiary, corp, telecommunications, infrastructure, partnership; patent, industry, reports, global, survey, leading, firm, 2015, innovation, financial .15 2.44 9 96 Cooking parmesan, curried, marinated, zucchini, roasted, coleslaw, salad, tomato, spinach, lentils; recipe, meat, salad, egg, soup, sauce, beef, served, pork, rice .15 3.00 12 164 Elongated Words yaaayy, wooooo, woooo, yayyyyy, yaaaaay, yayayaya, yayy, yaaaaaaay, wooohooo, yaayyy; wait, till, til, yay, ahhh, hoo, woo, woot, whoop, woohoo .11 3.47 16 176 Politics religious, colonialism, christianity, judaism, persecution, fascism, marxism, nationalism, communism, apartheid; human, culture, justice, religion, democracy, religious, humanity, tradition, ancient, racism .08 3.09 Table 4: Topics, represented by their most central and most frequent 10 words, sorted by their ARD lengthscale MRR across the nine GP-based occupation classifiers. µ(l) denotes the average lengthscale for a topic across these classifiers. Topic labels are manually created. 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 0.0 0.2 0.4 0.6 Figure 1: Confusion matrix of the prediction results. Rows represent the actual occupational class (C 1– 9) and columns the predicted class. curacy than the SVD on the NPMI matrix. This is consistent with previous work that showed the efficiency of word2vec and the ability of those embeddings to capture non-linear relationships and syntactic features (Mikolov et al., 2013a; Mikolov et al., 2013b; Mikolov et al., 2013c). LR has a lower performance than the non-linear methods, especially when using clusters as features. GPs usually outperform SVMs by a small margin. However, these offer the advantages of not using the validation set and the interpretability properties we highlight in the next section. Although we only draw our focus on major occupational classes, the data set allows the study of finer granularities of occupation classes in future work. 
For example, prediction performance for sub-major groups reaches 33.9% accuracy (15.6% majority class, 22 classes) and 29.2% accuracy for minor groups (3.4% majority class, 55 classes). 6.2 Error Analysis To illustrate the errors made by our classifiers, Figure 1 shows the confusion matrix of the classification results. First, we observe that class 4 is many times classified as class 2 or 3. This can be explained by the fact that classes 2, 3 and 4 contain similar types of occupations, e.g. doctors and nurses or accountants and assistant accountants. However, with very few exceptions, we notice that only adjacent classes get misclassified, suggesting 1759 that our model captures the general user skill level. 6.3 Qualitative Analysis The word clusters that were built from a reference corpus and then used as features in the GP classifier, give us the opportunity to extract some qualitative derivations from our predictive task. For the rest of the section we use the best performing model of this type (W2V-C-200) in order to analyse the results. Our main assumption is that there might be a divergence of language and topic usage across occupational classes following previous studies in sociology (Bernstein, 1960; Bernstein, 2003). Knowing that the inferred GP lengthscale hyperparameters are inversely proportional to feature (i.e. topic) relevance (see Section 5), we can use them to rank the topic importance and give answers to our hypothesis. Table 4 shows 10 of the most informative topics (represented by the top 10 most central and frequent words) sorted by their ARD lengthscale Mean Reciprocal Rank (MRR) (Manning et al., 2008) across the nine classifiers. Evidently, they cover a broad range of thematic subjects, including potentially work specific topics in different domains such as ‘Corporate’ (Topic #124), ‘Software Engineering’ (#158), ‘Health’ (#105), ‘Higher Education’ (#21) and ‘Arts’ (#116), as well as topics covering recreational interests such as ‘Football’ (#186), ‘Cooking’ (#96) and ‘Beauty Care’ (#153). The highest ranked MRR GP lengthscales only highlight the topics that are the most discriminative of the particular learning task, i.e. which topic used alone would have had the best performance. To examine the difference in topic usage across occupations, we illustrate how six topics are covered by the users of each class. Figure 2 shows the Cumulative Distribution Functions (CDFs) across the nine different occupational classes for these six topics. CDFs indicate the fraction of users having at least a certain topic proportion in their tweets. A topic is more prevalent in a class, if the CDF line leans towards the bottom-right corner of the plot. ‘Higher Education’ (#21) is more prevalent in classes 1 and 2, but is also discriminative for classes 3 and 4 compared to the rest. This is expected because the vast majority of jobs in these classes require a university degree (holds for all of the jobs in classes 2 and 3) or are actually jobs in higher education. On the other hand, classes 5 to 9 have a similar behaviour, tweeting less on this topic. We also observe that words in ‘Corporate’ (#124) are used more as the skill required for a job gets higher. This topic is mainly used by people in classes 1 and 2 and with less extent in classes 3 and 4, indicating that people in these occupational classes are more likely to use social media for discussions about corporate business. There is a clear trend of people with more skilled jobs to talk about ‘Politics’ (#176). 
Indeed, highly ranked politicians and political philosophers are parts of classes 1 and 2 respectively. Nevertheless, this pattern expands to the entire spectrum of the investigated occupational classes, providing further proof-of-concept for our methodology, under the assumption that the theme of politics is more attractive to the higher skilled classes rather than the lower skilled occupations. By examining ‘Arts’ (#116), we see that it clearly separates class 5, which includes artists, from all others. This topic appears to be relevant to most of the classification tasks and it is ranked first according to the MRR metric. Moreover, we observe that people with higher skilled jobs and education (classes 1–3) post more content about arts. Finally, we examine two topics containing words that can be used in more informal occasions, i.e. ‘Elongated Words’ (#164) and ‘Beauty Care’ (#153). We observe a similar pattern in both topics by which users with lower skilled jobs tweet more often. 1 2 3 4 5 6 7 8 9 1 2 3 4 5 6 7 8 9 0.00 0.01 0.02 0.03 Figure 3: Jensen-Shannon divergence in the topic distributions between the different occupational classes (C 1–9). The main conclusion we draw from Figure 2 is that there exists a topic divergence between users in the lower vs. higher skilled occupational classes. To examine this distinction better, we use the JensenShannon divergence (JSD) to quantify the difference between the topic distributions across every 1760 0.001 0.01 0.05 0 0.2 0.4 0.6 0.8 1 User probability Higher Education (#21) C1 C2 C3 C4 C5 C6 C7 C8 C9 0.001 0.01 0.05 0 0.2 0.4 0.6 0.8 1 Corporate (#124) C1 C2 C3 C4 C5 C6 C7 C8 C9 0.001 0.01 0.05 0 0.2 0.4 0.6 0.8 1 User probability Politics (#176) C1 C2 C3 C4 C5 C6 C7 C8 C9 0.001 0.01 0.05 0 0.2 0.4 0.6 0.8 1 Arts (#116) C1 C2 C3 C4 C5 C6 C7 C8 C9 0.001 0.01 0.05 0 0.2 0.4 0.6 0.8 1 Topic proportion User probability Beauty Care (#153) C1 C2 C3 C4 C5 C6 C7 C8 C9 0.001 0.01 0.05 0 0.2 0.4 0.6 0.8 1 Topic proportion Elongated Words (#164) C1 C2 C3 C4 C5 C6 C7 C8 C9 Figure 2: CDFs for six of the most important topics; the x-axis is on the log-scale for display purposes. A point on a CDF line indicates the fraction of users (y-axis point) with a topic proportion in their tweets lower or equal to the corresponding x-axis point. The topic is more prevalent in a class, if the CDF line leans closer to the bottom-right corner of the plot. class pair. Figure 3 visualises these differences. There, we confirm that adjacent classes use similar topics of discussion. We also notice that JSD increases as the classes are further apart. Two main groups of related classes, with a clear separation from the rest, are identified: classes 1–2 and 6–9. For the users belonging to these two groups, we compute their topic usage distribution (for the top topics listed in Table 4). Then, we assess whether the topic usage distributions of those super-classes of occupations have a statistically significant difference by performing a two-sample KolmogorovSmirnov test. We enumerate the group topic usage means in Table 5; all differences were indeed statistically significant (p < 10−5). From this comparison, we conclude that users in the higher skilled classes have a higher representation in all top topics but ‘Beauty Care’ and ‘Elongated Words’. Hence, the original hypothesis about the difference in the usage of language between upper and lower occupational classes is reconfirmed in this more generic testing. 
A very noticeable difference occurs for the 1761 Topics C 1–2 C 6–9 Arts 4.95 2.79 Health 4.45 2.13 Beauty Care 1.40 2.24 Higher Education 6.04 2.56 Software Engineering 6.31 2.54 Football 0.54 0.52 Corporate 5.15 1.41 Cooking 2.81 2.49 Elongated Words 1.90 3.78 Politics 2.14 1.06 Table 5: Comparison of mean topic usage for super-sets (classes 1–2 vs. 6–9) of the occupational classes; all values were multiplied by 103. The difference between the topic usage distributions was statistically significant (p < 10−5). ‘Corporate’ topic, whereas ‘Football’ registers the lowest distance. 7 Related Work Occupational class prediction has been studied in the past in the areas of psychology and economics. French (1959) investigated the relation between various measures on 232 undergraduate students and their future occupations. This study concluded that occupational membership can be predicted from variables such as the ability of subjects in using mathematical and verbal symbols, their family economic status, body-build and personality components. Schmidt and Strauss (1975) also studied the relationship between job types (five classes) and certain demographic attributes (gender, race, experience, education, location). Their analysis identified biases or discrimination which possibly exist in different types of jobs. Sociolinguistic and sociology studies deduct that social status is an important factor in determining the use of language (Bernstein, 1960; Bernstein, 2003; Labov, 2006). Differences arise either due to language use or due to the topics people discuss as parts of various social domains. However, a large scale investigation of this hypothesis has never been attempted. Relevant to our task is a relation extraction approach proposed by Li et al. (2014) aiming to extract user profile information on Twitter. They used a weakly supervised approach to obtain information for job, education and spouse. Nonetheless, the information relevant to the job attribute regards the employer of a user (i.e. the name of a company) rather than the type of occupation. In addition, Huang et al. (2014) proposed a method to classify Sina Weibo users to twelve predefined occupations using content based and network features. However, there exist significant differences from our task since this inference is based on a distinct platform, with an ambiguous distribution over occupations (e.g. more than 25% related to media), while the occupational classes are not generic (e.g. media, welfare and electronic are three of the twelve categories). Most importantly, the applied model did not allow for a qualitative interpretation. Filho et al. (2014) inferred the social class of social media users by combining geolocation information derived from Foursquare and Twitter posts. Recently, Sloan et al. (2015) introduced tools for the automated extraction of demographic data (age, occupation and social class) from the profile descriptions of Twitter users using a similar method to our data set extraction approach. They showed that it is feasible to build a data set that matches the real-world UK occupation distribution as given by the SOC. 8 Conclusions Our paper presents the first large-scale systematic study on language use on social media as a factor for inferring a user’s occupational class. To address this problem, we have also introduced an extensive labelled data set extracted from Twitter. 
We have framed prediction as a classification task and, to this end, we used the powerful, non-linear GP framework that combines strong predictive performance with feature interpretability. Results show that we can achieve a good predictive accuracy, highlighting that the occupation of a user influences text use. Through a qualitative analysis, we have shown that the derived topics capture both occupation specific interests as well as general class-based behaviours. We acknowledge that the derivations of this study, similarly to other studies in the field, are reflecting the Twitter population and may experience a bias introduced by users self-mentioning their occupations. However, the magnitude, occupational diversity and face validity of our conclusions suggest that the presented approach is useful for future downstream applications. 1762 Acknowledgements DP-P acknowledges the support from Templeton Religion Trust, grant TRT-0048. VL and NA acknowledge the support from EPSRC (UK) project EP/K031953/1. We thank Mark Stevenson for his critical comments on early drafts of this paper. References Faiyaz Al Zamal, Wendy Liu, and Derek Ruths. 2012. Homophily and Latent Attribute Inference: Inferring Latent Attributes of Twitter Users from Neighbors. In Proc. of 6th International Conference on Weblogs and Social Media, pages 387–390. Basil Bernstein. 1960. Language and social class. British Journal of Sociology, pages 271–276. Basil Bernstein. 2003. Class, codes and control: Applied studies towards a sociology of language, volume 2. Psychology Press. Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science, 2(1):1–8. Gerlof Bouma. 2009. Normalized (pointwise) mutual information in collocation extraction. In Biennial GSCL Conference, pages 31–40. D. John Burger, John Henderson, George Kim, and Guido Zarrella. 2011. Discriminating Gender on Twitter. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1301–1309. Zhiyuan Cheng, James Caverlee, and Kyumin Lee. 2010. You are where you tweet: a content-based approach to geo-locating twitter users. In Proceedings of the 19th ACM Conference on Information and Knowledge Management, CIKM, pages 759–768. Trevor Cohn and Lucia Specia. 2013. Modelling annotator bias with multi-task gaussian processes: An application to machine translation quality estimation. In 51st Annual Meeting of the Association for Computational Linguistics, ACL, pages 32–42. Glen Coppersmith, Craig Harman, and Mark Dredze. 2014. Measuring post traumatic stress disorder in twitter. In International Conference on Weblogs and Social Media, ICWSM. Renato Miranda Filho, Guilherme R. Borges, Jussara M. Almeida, and Gisele L. Pappa. 2014. Inferring user social class in online social networks. In Proceedings of the 8th Workshop on Social Network Mining and Analysis, SNAKDD’14, pages 10:1– 10:5. David Freedman. 2009. Statistical models: theory and practice. Cambridge University Press. Wendell L French. 1959. Can a man’s occupation be predicted? Journal of Counseling Psychology, 6(2):95. Mark Gibbs and David J. C. Mackay. 1997. Variational gaussian process classifiers. IEEE Transactions on Neural Networks, 11:1458–1464. Brent Hecht, Lichan Hong, Bongwon Suh, and Ed H. Chi. 2011. Tweets from justin bieber’s heart: The dynamics of the location field in user profiles. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI. 
Yanxiang Huang, Lele Yu, Xiang Wang, and Bin Cui. 2014. A multi-source integration framework for user occupation inference in social media systems. World Wide Web, pages 1–21. William Labov. 2006. The Social Stratification of English in New York City. Cambridge University Press, second edition. Vasileios Lampos and Nello Cristianini. 2010. Tracking the flu pandemic by monitoring the Social Web. In Proc. of the 2nd International Workshop on Cognitive Information Processing, pages 411–416. Vasileios Lampos and Nello Cristianini. 2012. Nowcasting Events from the Social Web with Statistical Learning. ACM Transactions on Intelligent Systems and Technology, 3(4):72:1–72:22. Vasileios Lampos, Daniel Preot¸iuc-Pietro, and Trevor Cohn. 2013. A user-centric model of voting intention from Social Media. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL, pages 993–1003. Vasileios Lampos, Nikolaos Aletras, Daniel Preot¸iucPietro, and Trevor Cohn. 2014. Predicting and characterising user impact on Twitter. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, EACL, pages 405–413. Omer Levy and Yoav Goldberg. 2014. Neural word embeddings as implicit matrix factorization. In Advances in Neural Information Processing Systems, NIPS, pages 2177–2185. Jiwei Li, Alan Ritter, and Eduard H. Hovy. 2014. Weakly supervised user profile extraction from twitter. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL, pages 165–174. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch¨utze. 2008. Introduction to Information Retrieval. Cambridge University Press. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of Workshop at the International Conference on Learning Representations, ICLR. 1763 Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, NIPS, pages 3111–3119. Tomas Mikolov, Wen tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In Proceedings of the 2010 annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL, pages 746–751. Thomas P. Minka. 2001. Expectation propagation for approximate bayesian inference. In Proceedings of the 17th Conference in Uncertainty in Artificial Intelligence, UAI ’01. Radford M. Neal. 1996. Bayesian Learning for Neural Networks. Springer-Verlag New York, Inc. Radford M. Neal. 1999. Regression and classification using gaussian process priors. Bayesian Statistics 6, pages 475–501. Andrew Y. Ng, Michael I. Jordan, and Yair Weiss. 2002. On spectral clustering: Analysis and an algorithm. In Advances in Neural Information Processing Systems, NIPS, pages 849–856. Marco Pennacchiotti and Ana-Maria Popescu. 2011. A machine learning approach to twitter user classification. ICWSM, pages 281–288. Tamara Polajnar, Simon Rogers, and Mark Girolami. 2011. Protein interaction detection in sentences via gaussian processes; a preliminary evaluation. International Journal of Data Mining and Bioinformatics, 5(1):52–72. Daniel Preot¸iuc-Pietro and Trevor Cohn. 2013. A temporal model of text periodicities using Gaussian Processes. EMNLP. 
Daniel Preot¸iuc-Pietro, Sina Samangooei, Trevor Cohn, Nicholas Gibbins, and Mahesan Niranjan. 2012. Trendminer: An architecture for real time analysis of social media text. In Workshop on Real-Time Analysis and Mining of Social Streams (RAMSS), ICWSM. Delip Rao, David Yarowsky, Abhishek Shreevats, and Manaswi Gupta. 2010. Classifying Latent User Attributes in Twitter. In Proceedings of the 2nd International Workshop on Search and Mining Usergenerated Contents, SMUC, pages 37–44. Carl Edward Rasmussen and Christopher K. I. Williams. 2006. Gaussian Processes for Machine Learning. The MIT Press. Adam Sadilek, Henry Kautz, and Vincent Silenzio. 2012. Modeling Spread of Disease from Social Interactions. In Proc. of 6th International Conference on Weblogs and Social Media, pages 322–329. Peter Schmidt and Robert P Strauss. 1975. The prediction of occupation using multiple logit models. International Economic Review, 16(2):471–86. Jianbo Shi and Jitendra Malik. 2000. Normalized cuts and image segmentation. Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905. Luke Sloan, Jeffrey Morgan, Pete Burnap, and Matthew Williams. 2015. Who tweets? Deriving the demographic characteristics of age, occupation and social class from twitter user meta-data. PloS one, 10(3):e0115545. Edward Snelson and Zoubin Ghahramani. 2006. Sparse gaussian processes using pseudo-inputs. In Advances in Neural Information Processing Systems, NIPS, pages 1257–1264. Andranik Tumasjan, Timm Oliver Sprenger, Philipp G Sandner, and Isabell M Welpe. 2010. Predicting Elections with Twitter: What 140 Characters Reveal about Political Sentiment. In Proc. of 4th International Conference on Weblogs and Social Media, pages 178–185. Vladimir N Vapnik. 1998. Statistical learning theory. Wiley, New York. Ulrike von Luxburg. 2007. A tutorial on spectral clustering. Statistics and computing, 17(4):395–416. Christopher K.I Williams and David Barber. 1998. Bayesian classification with gaussian processes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20 (12):1342–1351. 1764
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 167–176, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Event Extraction via Dynamic Multi-Pooling Convolutional Neural Networks Yubo Chen, Liheng Xu, Kang Liu, Daojian Zeng and Jun Zhao National Laboratory of Pattern Recognition Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China {yubo.chen,lhxu,kliu,djzeng,jzhao}@nlpr.ia.ac.cn Abstract Traditional approaches to the task of ACE event extraction primarily rely on elaborately designed features and complicated natural language processing (NLP) tools. These traditional approaches lack generalization, take a large amount of human effort and are prone to error propagation and data sparsity problems. This paper proposes a novel event-extraction method, which aims to automatically extract lexical-level and sentence-level features without using complicated NLP tools. We introduce a word-representation model to capture meaningful semantic regularities for words and adopt a framework based on a convolutional neural network (CNN) to capture sentence-level clues. However, CNN can only capture the most important information in a sentence and may miss valuable facts when considering multiple-event sentences. We propose a dynamic multi-pooling convolutional neural network (DMCNN), which uses a dynamic multi-pooling layer according to event triggers and arguments, to reserve more crucial information. The experimental results show that our approach significantly outperforms other state-of-the-art methods. 1 Introduction Event extraction is an important and challenging task in Information Extraction (IE), which aims to discover event triggers with specific types and their arguments. Current state-of-the-art methods (Li et al., 2014; Li et al., 2013; Hong et al., 2011; Liao and Grishman, 2010; Ji and Grishman, 2008) often use a set of elaborately designed features that are extracted by textual analysis and linguistic knowledge. In general, we can divide the features into two categories: lexical features and contextual features. Lexical features contain part-of-speech tags (POS), entity information, and morphology features (e.g., token, lemma, etc.), which aim to capture semantics or the background knowledge of words. For example, consider the following sentence with an ambiguous word beats: S1: Obama beats McCain. S2: Tyson beats his opponent . In S1, beats is a trigger of type Elect. However, in S2, beats is a trigger of type Attack, which is more common than type Elect. Because of the ambiguity, a traditional approach may mislabel beats in S1 as a trigger of Attack. However, if we have the priori knowledge that Obama and McCain are presidential contenders, we have ample evidence to predict that beats is a trigger of type Elect. We call these knowledge lexical-level clues. To represent such features, the existing methods (Hong et al., 2011) often rely on human ingenuity, which is a time-consuming process and lacks generalization. Furthermore, traditional lexical features in previous methods are a one-hot representation, which may suffer from the data sparsity problem and may not be able to adequately capture the semantics of the words (Turian et al., 2010). 
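To make the contrast with one-hot lexical features concrete, the sketch below builds a dense lexical-level feature for a candidate word by looking up pre-trained embeddings for the word and its immediate neighbours and concatenating them. This anticipates the representation described later in Section 3.1; the function and variable names (lexical_feature, emb) are ours, not the paper's.

import numpy as np

# A sketch (names are ours) of a dense lexical-level feature: concatenate
# pre-trained embeddings of the candidate token and its immediate neighbours,
# instead of a sparse one-hot encoding. `emb` maps a token to its vector.
def lexical_feature(tokens, idx, emb, dim=100):
    def vec(i):
        if 0 <= i < len(tokens):
            return emb.get(tokens[i], np.zeros(dim))
        return np.zeros(dim)              # pad beyond the sentence boundaries
    return np.concatenate([vec(idx - 1), vec(idx), vec(idx + 1)])

# e.g. lexical_feature("Obama beats McCain .".split(), 1, emb)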
To identify events and arguments more precisely, previous methods often captured contextual features, such as syntactic features, which aim to understand how facts are tied together from a larger field of view. For example, in S3, there are two events that share three arguments as shown in Figure 1. From the dependency relation of nsubj between the argument cameraman and trigger died, we can induce a Victim role to cameraman in the Die event. We call such information sentence-level clues. However, the argument word cameraman and its trigger word fired are in different clauses, and there is no direct de167 In Baghdad , a cameraman died when an American tank fired on the Palestine Hotel. prep_in det nsubj nsubj advcl advmod det amod prep_on det nn Figure 1: Event mentions and syntactic parser results of S3. The upper side shows two event mentions that share three arguments: the Die event mention, triggered by “died”, and the Attack event mention, triggered by “fired”. The lower side shows the collapsed dependency results. pendency path between them. Thus it is difficult to find the Target role between them using traditional dependency features. In addition, extracting such features depends heavily on the performance of pre-existing NLP systems, which could suffer from error propagation. S3: In Baghdad, a cameraman died when an American tank fired on the Palestine Hotel. To correctly attach cameraman to fired as a Target argument, we must exploit internal semantics over the entire sentence such that the Attack event results in Die event. Recent improvements of convolutional neural networks (CNNs) have been proven to be efficient for capturing syntactic and semantics between words within a sentence (Collobert et al., 2011; Kalchbrenner and Blunsom, 2013; Zeng et al., 2014) for NLP tasks. CNNs typically use a max-pooling layer, which applies a max operation over the representation of an entire sentence to capture the most useful information. However, in event extraction, one sentence may contain two or more events, and these events may share the argument with different roles. For example, there are two events in S3, namely, the Die event and Attack event. If we use a traditional max-pooling layer and only keep the most important information to represent the sentence, we may obtain the information that depicts “a cameraman died” but miss the information about “American tank fired on the Palestine Hotel”, which is important for predicting the Attack event and valuable for attaching cameraman to fired as an Target argument. In our experiments, we found that such multiple-event sentences comprise 27.3% of our dataset, which is a phenomenon we cannot ignore. In this paper, we propose a dynamic multipooling convolutional neural network (DMCNN) to address the problems stated above. To capture lexical-level clues and reduce human effort, we introduce a word-representation model (Mikolov et al., 2013b), which has been shown to be able to capture the meaningful semantic regularities of words (Bengio et al., 2003; Erhan et al., 2010; Mikolov et al., 2013a). To capture sentence-level clues without using complicated NLP tools, and to reserve information more comprehensively, we devise a dynamic multi-pooling layer for CNN, which returns the maximum value in each part of the sentence according to event triggers and arguments. 
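Concretely, the dynamic multi-pooling idea just described can be sketched as follows: the convolutional feature map is split at the positions of the candidate argument and the predicted trigger, and each part is max-pooled separately instead of taking one max over the whole sentence. This is only an illustrative sketch (the exact boundary convention and the handling of empty segments are our assumptions), not the authors' implementation.

import numpy as np

# Sketch of dynamic multi-pooling (not the authors' code). `feature_map` is the
# convolution output with shape (n_filters, sentence_length). Instead of one
# max over the whole sentence, the map is split at the candidate-argument and
# trigger positions and each part is max-pooled separately.
def dynamic_multi_pooling(feature_map, arg_pos, trig_pos):
    left, right = sorted((arg_pos, trig_pos))
    parts = [feature_map[:, :left + 1],
             feature_map[:, left + 1:right + 1],
             feature_map[:, right + 1:]]
    pooled = [p.max(axis=1) for p in parts if p.shape[1] > 0]
    return np.concatenate(pooled)         # sentence-level feature vector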
In summary, the contributions of this paper are as follows: • We present a novel framework for event extraction, which can automatically induce lexical-level and sentence-level features from plain texts without complicated NLP preprocessing. • We devise a dynamic multi-pooling convolutional neural network (DMCNN), which aims to capture more valuable information within a sentence for event extraction. • We conduct experiments on a widely used ACE2005 event extraction dataset, and the experimental results show that our approach outperforms other state-of-the-art methods. 2 Event Extraction Task In this paper, we focus on the event extraction task defined in Automatic Content Extraction1 (ACE) evaluation, where an event is defined as a specific occurrence involving participants. First, we introduce some ACE terminology to understand this task more easily: 1http://projects.ldc.upenn.edu/ace/ 168 • Event mention: a phrase or sentence within which an event is described, including a trigger and arguments. • Event trigger: the main word that most clearly expresses the occurrence of an event (An ACE event trigger is typically a verb or a noun). • Event argument: an entity mention, temporal expression or value (e.g. Job-Title) that is involved in an event (viz., participants). • Argument role: the relationship between an argument to the event in which it participates. Given an English text document, an event extraction system should predict event triggers with specific subtypes and their arguments for each sentence. The upper side of figure 1 depicts the event triggers and their arguments for S3 in Section 1. ACE defines 8 event types and 33 subtypes, such as Attack or Elect. Although event extraction depends on name identification and entity mention co-reference, it is another difficult task in ACE evaluation and not the focus in the event extraction task. Thus, in this paper, we directly leverage the entity label provided by the ACE, following most previous works (Hong et al., 2011; Liao and Grishman, 2010; Ji and Grishman, 2008). 3 Methodology In this paper, event extraction is formulated as a two-stage, multi-class classification via dynamic multi-pooling convolutional neural networks with the automatically learned features. The first stage is called trigger classification, in which we use a DMCNN to classify each word in a sentence to identify trigger words. If one sentence has triggers, the second stage is conducted, which applies a similar DMCNN to assign arguments to triggers and align the roles of the arguments. We call this argument classification. Because the second stage is more complicated, we first describe the methodology of argument classification in Section 3.1∼3.4 and then illustrate the difference between the DMCNNs that are used for trigger classification and those used for argument classification in Section 3.5. Figure 2 describes the architecture of argument classification, which primarily involves the following four components: (i) word-embedding learning, which reveals the embedding vectors of words in an unsupervised manner; (ii) lexical-level feature representation, which directly uses embedding vectors of words to capture lexical clues; (iii) sentence-level feature extraction, which proposes a DMCNN to learn the compositional semantic features of sentences; and (iv) argument classifier output, which calculates a confidence score for each argument role candidate. 
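For concreteness, the two-stage formulation just described can be summarised in a short skeleton. The sketch below is purely illustrative (it is not the authors' code): the two classifiers stand in for the DMCNNs described in the following subsections and are passed in as callables, the class and function names are hypothetical, and entity mentions are taken as given, i.e., the gold ACE labels, as stated above.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Argument:
    text: str                  # entity mention, temporal expression or value
    role: str                  # e.g. "Victim" in a Die event, "Target" in an Attack event

@dataclass
class EventMention:
    trigger: str               # main word expressing the occurrence of the event
    subtype: str               # one of the 33 ACE subtypes, e.g. "Die" or "Attack"
    arguments: List[Argument] = field(default_factory=list)

def extract_events(sentence_tokens, entities,
                   classify_trigger: Callable, classify_argument: Callable):
    # Stage 1 (trigger classification): every word is assigned an event subtype
    # or "None". Stage 2 (argument classification): run only when the sentence
    # contains at least one trigger; every entity candidate receives a role or
    # the "None role" label.
    events = []
    for i, token in enumerate(sentence_tokens):
        subtype = classify_trigger(sentence_tokens, i)                     # first DMCNN
        if subtype == "None":
            continue
        mention = EventMention(token, subtype)
        for entity in entities:                                            # gold ACE entity mentions
            role = classify_argument(sentence_tokens, i, entity, subtype)  # second DMCNN
            if role != "None":
                mention.arguments.append(Argument(entity, role))
        events.append(mention)
    return events

The remainder of this section describes how the two classifiers compute their lexical-level and sentence-level features.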
3.1 Word Embedding Learning and Lexical-Level Feature Representation Lexical-level features serve as important clues for event extraction (Hong et al., 2011; Li et al., 2013). Traditional lexical-level features primarily include lemma, synonyms and POS tag of the candidate words. The quality of such features depends strongly on the results of existing NLP tools and human ingenuity. Additionally, the traditional features remain unsatisfactory for capturing the semantics of words, which are important in event extraction, as showed in S1 and S2. As Erhan et al. (2010) reported, word embeddings learned from a significant amount of unlabeled data are more powerful for capturing the meaningful semantic regularities of words. This paper uses unsupervised pre-trained word embedding as the source of base features. We select the word embeddings of candidate words (candidate trigger, candidate argument) and the context tokens (left and right tokens of the candidate words). Then, all of these word embeddings are concatenated into the lexical-level features vector L to represent the lexical-level features in argument classification. In this work, we use the Skip-gram model to pre-train the word embedding. This model is the state-of-the-art model in many NLP tasks (Baroni et al., 2014). The Skip-gram model trains the embeddings of words w1, w2...wm by maximizing the average log probability, 1 m m X t=1 X −c≤j≤c,j̸=0 log p(wt+j|wt) (1) where c is the size of the training window. Basically, p(wt+j|wt) is defined as, p(wt+j|wt) = exp(e ′T t+jet) Pm w=1 exp(e ′T w et) (2) where m is the vocabulary of the unlabeled text. e ′ i is another embedding for ei, see Morin and Bengio (2005) for details. 169 ... a died when an American tank on ... Sentence Feature Input Convolutional Dynamic Multi-pooling Feature map 1 Feature map 2 Feature map 3 11 max(c ) 12 max(c ) 13 max(c ) Embedding Learning Lexical Level Feature Representation Classifier Output CWF PF EF ...... Sentence Level Feature Extraction Figure 2: The architecture for the stage of argument classification in the event extraction. It illustrates the processing of one instance with the predict trigger fired and the candidate argument cameraman. 3.2 Extracting Sentence-Level Features Using a DMCNN The CNN, with max-pooling layers, is a good choice to capture the semantics of long-distance words within a sentence (Collobert et al., 2011). However, as noted in the section 1, traditional CNN is incapable of addressing the event extraction problem. Because a sentence may contain more than one event, using only the most important information to represent a sentence, as in the traditional CNN, will miss valuable clues. To resolve this problem, we propose a DMCNN to extract the sentence-level features. The DMCNN uses a dynamic multi-pooling layer to obtain a maximum value for each part of a sentence, which is split by event triggers and event arguments. Thus, the DMCNN is expected to capture more valuable clues compared to traditional CNN methods. 3.2.1 Input This subsection illustrates the input needed for a DMCNN to extract sentence-level features. The semantic interactions between the predicted trigger words and argument candidates are crucial for argument classification. Therefore, we propose three types of input that the DMCNN uses to capture these important clues: • Context-word feature (CWF): Similar to Kalchbrenner et al. (2014) and Collobert et al. (2011), we take all the words of the whole sentence as the context. 
CWF is the vector of each word token transformed by looking up word embeddings. • Position feature (PF): It is necessary to specify which words are the predicted trigger or candidate argument in the argument classification. Thus, we proposed the PF, which is defined as the relative distance of the current word to the predicted trigger or candidate argument. For example, in S3, the relative distances of tank to the candidate argument cameraman is 5. To encode the position feature, each distance value is also represented by an embedding vector. Similar to word embedding, Distance Values are randomly initialized and optimized through back propagation. • Event-type feature (EF): The event type of a current trigger is valuable for argument classification (Ahn, 2006; Hong et al., 2011; Liao and Grishman, 2010; Li et al., 2013), so we encode event type predicted in the trigger classification stage as an important clue for the DMCNN, as in the PF. Figure 2 assumes that word embedding has size dw = 4, position embedding has size dp = 1 and event-type embedding has size de = 1. Let xi ∈Rd be the d-dimensional vector representation corresponding to the i-th word in the sentence, where d = dw + dp ∗2 + de. A sentence of length n is represented as follows: x1:n = x1 ⊕x2 ⊕... ⊕xn (3) where ⊕is the concatenation operator. Thus, combined word embedding, position embedding and event-type embedding transform an instance as a matrix X ∈Rn×d. Then, X is fed into the convolution part. 170 3.2.2 Convolution The convolution layer aims to capture the compositional semantics of a entire sentence and compress these valuable semantics into feature maps. In general, let xi:i+j refer to the concatenation of words xi, xi+1, ..., xi+j. A convolution operation involves a filter w ∈Rh×d, which is applied to a window of h words to produce a new feature. For example, a feature ci is generated from a window of words xi:i+h−1 by the following operator, ci = f(w · xi:i+h−1 + b) (4) where b ∈R is a bias term and f is a non-linear function such as the hyperbolic tangent. This filter is applied to each possible window of words in the sentence x1:h, x2:h+1, ..., xn−h+1:n to produce a feature map ci where the index i ranges from 1 to n −h + 1. We have described the process of how one feature map is extracted from one filter. To capture different features, it usually use multiple filters in the convolution. Assuming that we use m filters W = w1, w2, ..., wm, the convolution operation can be expressed as: cji = f(wj · xi:i+h−1 + bj) (5) where j ranges from 1 to m. The convolution result is a matrix C ∈Rm×(n−h+1). 3.2.3 Dynamic Multi-Pooling To extract the most important features (max value) within each feature map, traditional CNNs (Collobert et al., 2011; Kim, 2014; Zeng et al., 2014) take one feature map as a pool and only get one max value for each feature map. However, single max-pooling is not sufficient for event extraction. Because in the task of this paper, one sentence may contain two or more events, and one argument candidate may play a different role with a different trigger. To make an accurate prediction, it is necessary to capture the most valuable information with regard to the change of the candidate words. Thus, we split each feature map into three parts according to the candidate argument and predicted trigger in the argument classification stage. Instead of using one max value for an entire feature map to represent the sentence, we keep the max value of each split part and call it dynamic multi-pooling. 
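To make Sections 3.2.1-3.2.3 concrete, the following sketch re-implements the input construction (Equation 3), the convolution of Equation 5 and the three-way dynamic multi-pooling of Equation 6 with plain NumPy. It is an illustrative reconstruction under simplifying assumptions, not the authors' implementation: lookup tables are randomly initialised, a single filter bank is used, and the toy embedding sizes follow Figure 2 (dw = 4, dp = 1, de = 1).

import numpy as np

rng = np.random.default_rng(0)

def build_input(tokens, arg_pos, trig_pos, word_emb, pos_emb, type_emb, event_type):
    # x_i = [CWF ; PF_argument ; PF_trigger ; EF], so d = dw + 2*dp + de (Eq. 3)
    rows = []
    for i, tok in enumerate(tokens):
        cwf = word_emb[tok]                      # context-word feature
        pf_arg = pos_emb[i - arg_pos]            # relative distance to the candidate argument
        pf_trig = pos_emb[i - trig_pos]          # relative distance to the predicted trigger
        ef = type_emb[event_type]                # event type predicted in the trigger stage
        rows.append(np.concatenate([cwf, pf_arg, pf_trig, ef]))
    return np.stack(rows)                        # X in R^{n x d}

def convolve(X, W, b, h=3):
    # c_ji = tanh(w_j . x_{i:i+h-1} + b_j) over every window of h words (Eq. 5)
    n, _ = X.shape
    m = W.shape[0]
    C = np.empty((m, n - h + 1))
    for i in range(n - h + 1):
        window = X[i:i + h].reshape(-1)
        C[:, i] = np.tanh(W.reshape(m, -1) @ window + b)
    return C                                     # C in R^{m x (n-h+1)}

def dynamic_multi_pool(C, arg_pos, trig_pos):
    # split every feature map at the candidate argument and the trigger and keep
    # one max per part (Eq. 6); assumes both positions fall inside the map range
    left, right = sorted((arg_pos, trig_pos))
    cuts = [0, left + 1, right + 1, C.shape[1]]
    parts = [C[:, cuts[k]:cuts[k + 1]].max(axis=1) for k in range(3)]
    return np.concatenate(parts)                 # P in R^{3m}

# toy run with the S3 trigger/argument pair of Figure 2
tokens = "a cameraman died when an american tank fired on the palestine hotel".split()
word_emb = {w: rng.normal(size=4) for w in set(tokens)}
pos_emb = {k: rng.normal(size=1) for k in range(-30, 31)}
type_emb = {"Attack": rng.normal(size=1)}
arg, trig = tokens.index("cameraman"), tokens.index("fired")
X = build_input(tokens, arg, trig, word_emb, pos_emb, type_emb, "Attack")
m, h = 3, 3
C = convolve(X, rng.normal(size=(m, h, X.shape[1])), np.zeros(m), h)
print(dynamic_multi_pool(C, arg, trig).shape)    # (9,) = 3*m sentence-level features

Splitting the feature maps at the candidate argument ("cameraman") and the predicted trigger ("fired") yields 3m pooled values instead of m, which is exactly the extra capacity the DMCNN uses to retain information about both events of S3.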
Compared to traditional max-pooling, dynamic multi-pooling can reserve more valuable information without missing the max-pooling value. As shown in Figure 2, the feature map output cj is divided into three sections cj1, cj2, cj3 by “cameraman” and “fired”. The dynamic multi-pooling can be expressed as formula 6,where 1 ≤j ≤m and 1 ≤i ≤3. pji = max(cji) (6) Through the dynamic multi-pooling layer, we obtain the pji for each feature map. Then, we concatenate all pji to form a vector P ∈R3m, which can be considered as higher-level features (sentence-level features). 3.3 Output The automatically learned lexical and sentencelevel features mentioned above are concatenated into a single vector F = [L, P]. To compute the confidence of each argument role, the feature vector F ∈R3m+dl, where m is the number of the feature map and dl is the dimension of the lexicallevel features, is fed into a classifier. O = WsF + bs (7) Ws ∈Rn1×(3m+dl) is the transformation matrix and O ∈Rn1 is the final output of the network, where n1 is equal to the number of the argument role including the “None role” label for the candidate argument which don’t play any role in the event. For regularization, we also employ dropout(Hinton et al., 2012) on the penultimate layer, which can prevent the co-adaptation of hidden units by randomly dropping out a proportion p of the hidden units during forward and backpropagation. 3.4 Training We define all of the parameters for the stage of argument classification to be trained as θ = (E, PF1, PF2, EF, W, b, WS, bs). Specifically, E is the word embedding, PF1 and PF2 are the position embedding, EF is the embedding of the event type, W and b are the parameter of the filter, Ws and bs are all of the parameters of the output layer. Given an input example s, the network with parameter θ outputs the vector O, where the i-th component Oi contains the score for argument role i. To obtain the conditional probability p(i|x, θ), we apply a softmax operation over all argument 171 role types: p(i|x, θ) = eoi n1 P k=1 eok (8) Given all of our (suppose T) training examples (xi; yi), we can then define the objective function as follows: J (θ) = T X i=1 log p(y(i)|x(i), θ) (9) To compute the network parameter θ, we maximize the log likelihood J (θ) through stochastic gradient descent over shuffled mini-batches with the Adadelta (Zeiler, 2012) update rule. 3.5 Model for Trigger Classification In the above sections, we presented our model and features for argument classification. The method proposed above is also suitable for trigger classification, but the task only need to find triggers in the sentence, which is less complicated than argument classification. Thus we can used a simplified version of DMCNN. In the trigger classification, we only use the candidate trigger and its left and right tokens in the lexical-level feature representation. In the feature representation of the sentence level, we use the same CWF as does in argument classification, but we only use the position of the candidate trigger to embed the position feature. Furthermore, instead of splitting the sentence into three parts, the sentence is split into two parts by a candidate trigger. Except for the above change in the features and model, we classify a trigger as the classification of an argument. Both stages form the framework of the event extraction. 4 Experiments 4.1 Dataset and Evaluation Metric We utilized the ACE 2005 corpus as our dataset. For comparison, as the same as Li et al. (2013), Hong et al. 
(2011) and Liao and Grishman (2010), we used the same test set with 40 newswire articles and the same development set with 30 other documents randomly selected from different genres and the rest 529 documents are used for training. Similar to previous work (Li et al., 2013; Hong et al., 2011; Liao and Grishman, 2010; Ji and Grishman, 2008), we use the following criteria to judge the correctness of each predicted event mention: • A trigger is correct if its event subtype and offsets match those of a reference trigger. • An argument is correctly identified if its event subtype and offsets match those of any of the reference argument mentions. • An argument is correctly classified if its event subtype, offsets and argument role match those of any of the reference argument mentions. Finally we use Precision (P), Recall (R) and F measure (F1) as the evaluation metrics. 4.2 Our Method vs. State-of-the-art Methods We select the following state-of-the-art methods for comparison. 1) Li’s baseline is the feature-based system proposed by Li et al. (2013), which only employs human-designed lexical features, basic features and syntactic features. 2) Liao’s cross-event is the method proposed by Liao and Grishman (2010), which uses documentlevel information to improve the performance of ACE event extraction. 3) Hong’s cross-entity is the method proposed by Hong et al. (2011), which extracts event by using cross-entity inference. To the best of our knowledge, it is the best-reported feature-based system in the literature based on gold standards argument candidates. 4) Li’s structure is the method proposed by Li et al. (2013), which extracts events based on structure prediction. It is the best-reported structurebased system. Following Li et al. (2013), we tuned the model parameters on the development through grid search. Moreover, in different stages of event extraction, we adopted different parameters in the DMCNN. Specifically, in the trigger classification, we set the window size as 3, the number of the feature map as 200, the batch size as 170 and the dimension of the PF as 5. In the argument classification, we set the window size as 3, the number of the feature map as 300, the batch size as 20 and the dimension of the PF and EF as 5. Stochastic gradient descent over shuffled mini-batches with the Adadelta update rule(Zeiler, 2012) is used for training and testing processes. It mainly contains two parameters p and ε. We set p = 0.95 and ε = 1e−6. For the dropout operation, we set the 172 Methods Trigger Identification(%) Trigger Identification + Classification(%) Argument Identification(%) Argument Role(%) P R F P R F P R F P R F Li’s baseline 76.2 60.5 67.4 74.5 59.1 65.9 74.1 37.4 49.7 65.4 33.1 43.9 Liao’s cross-event N/A 68.7 68.9 68.8 50.9 49.7 50.3 45.1 44.1 44.6 Hong’s cross-entity N/A 72.9 64.3 68.3 53.4 52.9 53.1 51.6 45.5 48.3 Li’s structure 76.9 65.0 70.4 73.7 62.3 67.5 69.8 47.9 56.8 64.7 44.4 52.7 DMCNN model 80.4 67.7 73.5 75.6 63.6 69.1 68.8 51.9 59.1 62.2 46.9 53.5 Table 1: Overall performance on blind test data rate = 0.5. We train the word embedding using the Skip-gram algorithm 2 on the NYT corpus 3. Table 1 shows the overall performance on the blind test dataset. From the results, we can see that the DMCNN model we proposed with the automatically learned features achieves the best performance among all of the compared methods. DMCNN can improve the best F1 (Li et al., 2013) in the state-of-the-arts for trigger classification by 1.6% and argument role classification by 0.8%. 
This demonstrates the effectiveness of the proposed method. Moreover, a comparison of Liao’s cross-event with Li’s baseline illustrates that Liao’s cross-event achieves a better performance. We can also make the same observation when comparing Hong’s cross-entity with Liao’s cross-event and comparing Li’s structure with Hong’s cross-entity. It proves that richer feature sets lead to better performance when using traditional human-designed features. However, our method could obtain further better results on the condition of only using automatically learned features from original words. Specifically, compared to Hong’s cross-entity, it gains 0.8% improvement on trigger classification F1 and 5.2% improvement on argument classification F1. We believe the reason is that the features we automatically learned can capture more meaningful semantic regularities of words. Remarkably, compared to Li’s structure, our approach with sentence and lexical features achieves comparable performance even though we do not use complicated NLP tools. 4.3 Effect of The DMCNN on Extracting Sentence-Level Features In this subsection, we prove the effectiveness of the proposed DMCNN for sentence-level feature extraction. We specifically select two methods as baselines for comparison with our DMCNN: Embeddings+T and CNN. Embeddings+T uses word 2https://code.google.com/p/word2vec/ 3https://catalog.ldc.upenn.edu/LDC2008T19 embeddings as lexical-level features and traditional sentence-level features based on human design (Li et al., 2013). A CNN is similar to a DMCNN, except that it uses a standard convolutional neural network with max-pooling to capture sentence-level features. By contrast, a DMCNN uses the dynamic multi-pooling layer in the network instead of the max-pooling layer in a CNN. Moreover, to prove that a DMCNN could capture more precise sentence-level features, especially for those sentences with multiple events, we divide the testing data into two parts according the event number in a sentence (single event and multiple events) and perform evaluations separately. Table 2 shows the proportion of sentences with multiple events or a single event and the proportion of arguments that attend one event or more events within one sentence in our dataset. Table 3 shows the results. Stage 1/1 (%) 1/N (%) Trigger 72.7 27.3 Argument 76.8 23.2 Table 2: The proportion of multiple events within one sentence. 1/1 means that one sentence only has one trigger or one argument plays a role in one sentence; otherwise, 1/N is used. Table 3 illustrates that the methods based on convolutional neural networks (CNN and DMCNN) outperform Embeddings+T. It proves that convolutional neural networks could be more effective than traditional human-design strategies for sentence-level feature extraction. In table 3, for all sentences, our method achieves improvements of approximately 2.8% and 4.6% over the CNN. The results prove the effectiveness of the dynamic multi-pooling layer. Interestingly, the DMCNN yields a 7.8% improvement for trigger classification on the sentences with multiple events. This improvement is larger than in sentences with a single event. Similar observations can be made for 173 the argument classification results. This demonstrates that the proposed DMCNN can effectively capture more valuable clues than the CNN with max-pooling, especially when one sentence contains more than one event. 
Stage Method 1/1 1/N all F1 F1 F1 Trigger Embedding+T 68.1 25.5 59.8 CNN 72.5 43.1 66.3 DMCNN 74.3 50.9 69.1 Argument Embedding+T 37.4 15.5 32.6 CNN 51.6 36.6 48.9 DMCNN 54.6 48.7 53.5 Table 3: Comparison of the event extraction scores obtained for the Traditional, CNN and DMCNN models 4.4 Effect of Word Embedding on Extracting Lexical-Level Features This subsection studies the effectiveness of our word embedding for lexical features. For comparison purposes, we select the baseline described by Li et al. (2013) as the traditional method, which uses traditional lexical features, such as n-grams, POS tags and some entity information. In contrast, we only use word embedding as our lexical feature. Moreover, to prove that word embedding could capture more valuable semantics, especially for those words in the test data that never appear to be the same event type or argument role in the training data, we divide the triggers and arguments in the testing data into two parts (1: appearing in testing data only, or 2: appearing in both testing and training data with the same event type or argument role) and perform evaluations separately. For triggers, 34.9% of the trigger words in the test data never appear to be the same event type in the training data. This proportion is 83.1% for arguments. The experimental results are shown in Table 4. Table 4 illustrates that for all situations, our method makes significant improvements compared with the traditional lexical features in the classification of both the trigger and argument. For situation B, the lexical-level features extracted from word embedding yield a 18.8% improvement for trigger classification and an 8.5% improvement for argument classification. This occurs because the baseline only uses discrete features, so they suffer from data sparsity and could not adequately handle a situation in which a trigger or argument does not appear in the training data. Stage Method A B All F1 F1 F1 Trigger Traditional 68.8 14.3 61.2 Ours 70.7 33.1 64.9 Argument Traditional 58.5 22.2 34.6 Ours 59.5 30.7 40.2 Table 4: Comparison of the results for the traditional lexical feature and our lexical feature. A denotes the triggers or arguments appearing in both training and test datasets, and B indicates all other cases. 4.5 Lexical features vs. Sentence Features To compare the effectiveness of different levels of features, we extract events by using lexical features and sentence features separately. The results obtained using the DMCNN are shown in table 5. Interestingly, in the trigger-classification stage, the lexical features play an effective role, whereas the sentence features play a more important role in the argument-classification stage. The best results are achieved when we combine lexical-level and sentence-level features. This observation demonstrates that both of the two-level features are important for event extraction. Feature Trigger Argument F1 F1 Lexical 64.9 40.2 Sentence 63.8 50.7 Combine 69.1 53.5 Table 5: Comparison of the trigger-classification score and argument-classification score obtained by lexical-level features, sentence-level features and a combination of both 5 Related Work Event extraction is one of important topics in NLP. Many approaches have been explored for event extraction. Nearly all of the ACE event extraction use supervised paradigm. We further divide supervised approaches into feature-based methods and structure-based methods. 
In feature-based methods, a diverse set of strategies has been exploited to convert classification clues (such as sequences and parse trees) into feature vectors. Ahn (2006) uses the lexical features(e.g., full word, pos tag), syntactic features (e.g., dependency features) and externalknowledge features(WordNet) to extract the event. Inspired by the hypothesis of “One Sense Per Dis174 course”(Yarowsky, 1995), Ji and Grishman (2008) combined global evidence from related documents with local decisions for the event extraction. To capture more clues from the texts, Gupta and Ji (2009), Liao and Grishman (2010) and Hong et al. (2011) proposed the cross-event and cross-entity inference for the ACE event task. Although these approaches achieve high performance, featurebased methods suffer from the problem of selecting a suitable feature set when converting the classification clues into feature vectors. In structure-based methods, researchers treat event extraction as the task of predicting the structure of the event in a sentence. McClosky et al. (2011) casted the problem of biomedical event extraction as a dependency parsing problem. Li et al. (2013) presented a joint framework for ACE event extraction based on structured perceptron with beam search. To use more information from the sentence, Li et al. (2014) proposed to extract entity mentions, relations and events in ACE task based on the unified structure. These methods yield relatively high performance. However, the performance of these methods depend strongly on the quality of the designed features and endure the errors in the existing NLP tools. 6 Conclusion This paper proposes a novel event extraction method, which can automatically extract lexicallevel and sentence-level features from plain texts without complicated NLP preprocessing. A wordrepresentation model is introduced to capture lexical semantic clues and a dynamic multi-pooling convolutional neural network (DMCNN) is devised to encode sentence semantic clues. The experimental results prove the effectiveness of the proposed method. Acknowledgments This work was supported by the National Basic Research Program of China (No. 2014CB340503) and the National Natural Science Foundation of China (No. 61272332 and No. 61202329) References David Ahn. 2006. The stages of event extraction. In Proceedings of ACL, pages 1–8. Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Dont count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of ACL, pages 238–247. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. The Journal of Machine Learning Research, 3:1137–1155. Chen Chen and V Incent NG. 2012. Joint modeling for chinese event extraction with rich linguistic features. In Proceedings of COLING, pages 529–544. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. Dumitru Erhan, Yoshua Bengio, Aaron Courville, Pierre-Antoine Manzagol, Pascal Vincent, and Samy Bengio. 2010. Why does unsupervised pre-training help deep learning? The Journal of Machine Learning Research, 11:625–660. Prashant Gupta and Heng Ji. 2009. Predicting unknown time arguments based on cross-event propagation. In Proceedings of ACL-IJCNLP, pages 369– 372. 
Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580. Yu Hong, Jianfeng Zhang, Bin Ma, Jianmin Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In Proceedings of ACL-HLT, pages 1127–1136. Heng Ji and Ralph Grishman. 2008. Refining event extraction through cross-document inference. In Proceedings of ACL, pages 254–262. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent convolutional neural networks for discourse compositionality. arXiv preprint arXiv:1306.3584. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. arXiv preprint arXiv:1404.2188. Yoon Kim. 2014. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882. Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Proceedings of AAAI. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In Proceedings of ACL, pages 73–82. 175 Qi Li, Heng Ji, Yu Hong, and Sujian Li. 2014. Constructing information networks using one single model. In Proceedings of EMNLP, pages 1846– 1851. Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In Proceedings of ACL, pages 789–797. David McClosky, Mihai Surdeanu, and Christopher D Manning. 2011. Event extraction as dependency parsing. In Proceedings of ACL-HLT, pages 1626– 1635. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119. Frederic Morin and Yoshua Bengio. 2005. Hierarchical probabilistic neural network language model. In Proceedings of AISTATS, pages 246–252. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proceedings of ACL, pages 384–394. David Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proceedings of ACL, pages 189–196. Matthew D Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of COLING, pages 2335–2344. 176
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1765–1773, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Tracking unbounded Topic Streams Dominik Wurzer School of Informatics University of Edinburgh d.s.wurzer @sms.ed.ac.uk Victor Lavrenko School of Informatics University of Edinburgh vlavrenk @inf.ed.ac.uk Miles Osborne Bloomberg London mosborne29 @bloomberg.net Abstract Tracking topics on social media streams is non-trivial as the number of topics mentioned grows without bound. This complexity is compounded when we want to track such topics against other fast moving streams. We go beyond traditional small scale topic tracking and consider a stream of topics against another document stream. We introduce two tracking approaches which are fully applicable to true streaming environments. When tracking 4.4 million topics against 52 million documents in constant time and space, we demonstrate that counter to expectations, simple single-pass clustering can outperform locality sensitive hashing for nearest neighbour search on streams. 1 Introduction The emergence of massive social media streams has sparked a growing need for systems able to process them. While previous research (Hassan et al., 2009; Becker et al., 2009; Petrovic et al., 2010; Cataldi et al., (2010); Weng et al., (2011); Petrovic 2013) has focused on detecting new topics in unbounded textual streams, less attention was paid to following (tracking) the steadily growing set of topics. Standard topic tracking (Allan, 2002) deals with helping human analysts follow and monitor ongoing events on massive data streams. By pairing topics with relevant documents, topic tracking splits a noisy stream of documents into sub-streams grouped by their target topics. This is a crucial task for financial and security analysts who are interested in pulling together relevant information from unstructured and noisy data streams. Other fields like summarization or topic modeling benefit from topic tracking as a mean to generate their data sources. In todays data streams however, new topics emerge on a continual basis and we are interested in following all instead of just a small fraction of newly detected topics. Since its introduction (Allan, 2002), standard topic tracking typically operates on a small scale and against a static set of predefined target topics. We go beyond such approaches and deal for the first time with massive, unbounded topic streams. Examples of unbounded topic streams include all events reported by news agencies each day across the world; popular examples of unbounded document streams include social media services such as Twitter. Tracking streams of topics allows research tasks like topic-modeling or summarization to be applied to millions of topics, a scale that is several orders of magnitude larger than those of current publications. We present two massive scale topic tracking systems capable of tracking unbounded topic streams. One is based on locality sensitive hashing (LSH) and the other on clustering. Since we operate on two unbounded data sources we are subject to the streaming model of computation (Muthukrishnan, 2005), which requires instant and single-pass decision making in constant time and space. Contrary to expectations, we find that nearest neighbour search on a stream based on clustering performs faster than LSH for the same level of accuracy. 
This is surprising as LSH is widely believed to be the fastest way of nearest neighbour search. Our experiments reveal how simple single-pass clustering outperforms LSH in terms of effectiveness and efficiency. Our results are general and apply to any setting where we have massive or infinite numbers of topics, matched against unboundedly large document streams. 1765 Contributions • For the first time we show how it is possible to track an unbounded stream of topics in constant time and space, while maintaining a level of effectiveness that is statistically indistinguishable from an exact tracking system • We show how single-pass clustering can outperform locality sensitive hashing in terms of effectiveness and efficiency for identifying nearest neighbours in a stream • We demonstrate that standard measures of similarity are sub-optimal when matching short documents against long documents 2 Related Work Topic or event tracking was first introduced in the Topic Detection and Tracking (TDT) program (Allan, 2002). In TDT, topic tracking involves monitoring a stream of news documents with the intent to identify those documents relevant to a small predefined set of target topics. During the course of TDT, research focused extensively on the effectiveness of tracking systems, neglecting scale and efficiency. The three official data sets only range from 25k to 74k documents with a few hundred topics (Allan, 2002). More recently, the rise of publicly available real-time social media streams triggered new research on topic detection and tracking, intended to apply the technology to those high volume document streams. The novel data streams differ from the TDT data sets in their volume and level of noise. To provide realtime applications, traditional methods need to be overhauled to keep computation feasible. It became common practice to limit data sets to cope with the computational effort. Popular strategies involve reducing the number of tracked topics (Lin et al., 2011; Nichols et al., 2012;) as well as sampling the document stream (Ghosh et al., 2013). These approaches have proven to be efficient in cutting down workload but they also limit an application’s performance. Furthermore, Sayyadi et al. (2009) discovered and tracked topics in social streams based on keyword graphs. They applied the sliding window principle to keep the computation feasible, although their data set only contained 18k documents. Yang et al. 2012 tracked topics in tweet streams using language models. To cope with the computational effort they assume a small set of topics of only a few dozen, which are defined in advance. Tang et al. (2011) tracked a single topic on a few thousand blogs based on semantic graph topic models. Pon et al. (2007) recommend news by tracking multiple topics for a user but their data sets only span several thousand documents and a few topics. Further related work includes the real-time filtering task, introduced as part of TREC’s Microblog Track in 2012 (Soboroff et al., 2012). Hong et al. (2013) explore topic tracking in tweet streams in relation to the TREC real-time filtering task by relying on a sliding window principle, while focusing on the cold start problem. 3 Topic Tracking 3.1 Traditional Approach Numerous approaches to topic tracking have emerged, spanning from probabilistic retrieval to statistical classification frameworks. 
While there is no single general approach, we define the traditional approach to tracking from a high-level perspective covering the basic principle of all previous approaches. We do not make any assumptions about the kind of topics, documents or distance functions used. As defined by TDT (Allan, 2002), we assume, we operate on an unbounded document stream with the goal of tracking a fixed set of target topics. Although topics are allowed to drift conceptually and evolve over time, new topics would always trigger the start of a new tracking system. Algorithm 1 Traditional Tracking INPUT: TOPIC-SET {t ϵ T} DOCUMENT-STREAM {d ϵ D} OUTPUT: relevant topic-document pairs {t, d} while documents d in stream D do for all topics t in set T do similarity = computeSimilarity(d,t) if similarity > threshold then emit relevant {t, d} As seen in Algorithm 1, documents arrive one at a time, requiring instant decision making through single pass processing. Each document is compared to all topics representations to identify the closest topic. The tracking decision is based on the similarity to the closest topic and usually defined by a thresholding strategy. Because incoming documents can be relevant to more than one topic, we 1766 need to match it against all of them. Due to its simplicity, the traditional tracking approach is highly efficient when applied to a fairly low number of topics. 3.2 Shortcomings of the traditional approach The traditional approach - though low in computational effort - becomes challenging when scaling up the number of target topics. The computational effort arises from the number of comparisons made (the number of documents times topics). That explains, why researches following the traditional approach have either lowered the number of documents or topics. Heuristics and indexing methods increase the performance but offer no solution scalable to true streaming environments because they only allow for one-side scaling (either a large number of documents or topics). Increasing either of the two components by a single document, increases the computational effort by the magnitude of the other one. For the extreme case of pushing to an infinite number of topics, tracking in constant space is a necessity. 4 Tracking at scale Before directly turning to a full streaming set up in constant space, we approach tracking a topic stream on a document stream in unbounded space. The key to scale up documents and topics, lies in reducing the number of necessary comparisons. Throughout the remainder of this paper we represent documents and topics arriving from a steady high volume stream by term-weighted vectors in the vector space. In order to cut down the search space, we encapsulate every topic vector by a hypothetical region marking its area of proximity. Those regions are intend to capture documents that are more likely to be relevant. Ideally, these regions form a hypersphere centred around every topic vector with a radius equal to the maximum distance to relevant documents. The tracking procedure is then reduced to determining whether an incoming document is also enclosed by any of the hyperspheres. 4.1 Approximated Tracking Our first attempt to reach sub-linear execution time uses random segmentation of the vector space using hashing techniques. We frame the tracking process as a nearest neighbour search problem, as defined by Gionis et al. (1999). Documents arriving from a stream are seen as queries and the closest topics are the nearest neighbours to be identified. 
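For reference, the exhaustive matching of Algorithm 1, which the two approximate approaches below are designed to avoid, can be written as a short single-pass loop. The sketch is schematic, not the system used in the experiments: vector representations, the similarity function and the threshold are placeholders (the best-performing choice reported later, in Section 6.1, is a BM25-weighted dot product).

def weighted_dot(d, t):
    # sparse {term: weight} vectors; the term weighting (e.g. BM25) is assumed
    # to have been applied when the vectors were built
    return sum(w * t.get(term, 0.0) for term, w in d.items())

def traditional_tracking(topics, document_stream, similarity, threshold):
    # Algorithm 1: a single pass over the document stream with instant decisions,
    # but every document is compared against every topic, so the cost grows as
    # (#documents) x (#topics)
    for doc_id, doc_vec in document_stream:
        for topic_id, topic_vec in topics.items():
            if similarity(doc_vec, topic_vec) > threshold:
                yield topic_id, doc_id           # emit relevant topic-document pair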
We explore locality sensitive hashing (LSH), as described by Indyk et al. (1998), to approach high dimensional nearest neighbour search for topic tracking in sub-linear time. LSH, which has been used to speed up NLP applications (Ravichandran et al., 2005), provides hash functions that guarantee that similar documents are more likely to be hashed to the same binary hash key than distant ones. Hash functions capture similarities between vectors in high dimensions and represent them on a low dimensional binary level. We apply the scheme by Charikar (2002), which describes the probabilistic bounds for the cosine similarity between two vectors. Each bit in a hash key represents a documents position with respect to a randomly placed hyperplane. Those planes segment the vector space, forming high dimensional polygon shaped buckets. Documents and topics are placed into a bucket by determining on which side of each the hyperplanes they are positioned. We interpret these buckets as regions of proximity as the collision probability is directly proportional to the cosine similarity between two vectors. Algorithm 2 LSH-based Tracking INPUT: TOPIC-STREAM {T} DOCUMENT-STREAM {D} OUTPUT: relevant topic-document pairs {t, d} while document d in T, D do if d ϵ T then hashKeys = hashLSH(d) store hashKeys in hashTables else if d ϵ D then candidateSet = lookupHashtables(hashLSH(d)) for all topics t in candidateSet do if similarity(d,t) > threshold then emit relevant {t, d} Algorithm 2 outlines the pseudo code to LSHbased tracking. Whenever a topic arrives, it is hashed, placing it into a bucket. To increase collision probability with similar documents, we repeat the hashing process with different hash functions, storing a topic and hash-key tuple in a hash table. On each arrival of a new document the same hash functions are applied and the key is matched against the hash tables, yielding a set of candidate topics. The probabilistic bounds of the hashing scheme guarantee that topics in the candidate set 1767 are on average more likely to be similar to the document than others. We then match each topic in the candidate set against the document to lower the false positive rate of LSH (Gionis, et al., 1999). The number of exact comparisons necessary is reduced to the number of topics in the candidate set. 4.2 Cluster based Tracking LSH based tracking segments the vector-space randomly without consideration of the data’s distribution. In contrast, we now propose a data dependent approach through document clustering. The main motivation for data dependent space segmentation is increased effectiveness resulting from taking the topic distribution within the vector space into account when forming the regions of proximity. We construct these regions by grouping similar topics to form clusters represented by a centroid. When tracking a document, it is matched against the centroids instead of all topics, yielding a set of candidate topics. This allows reducing the number of comparisons necessary to only the number of centroids plus the number of topics captured by the closest cluster. 
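A minimal sketch of Algorithm 2 under the random-hyperplane scheme of Charikar (2002) follows. It assumes dense topic and document vectors, and the default values for the two parameters discussed above (k hash bits per key, L tables) are arbitrary picks from within the ranges swept in Section 6.2; it is an illustration, not the implementation used in the experiments.

import numpy as np
from collections import defaultdict

class LSHTracker:
    def __init__(self, dim, k=13, L=70, seed=0):
        rng = np.random.default_rng(seed)
        # L independent hash functions, each defined by k random hyperplanes
        self.planes = [rng.normal(size=(k, dim)) for _ in range(L)]
        self.tables = [defaultdict(list) for _ in range(L)]

    def _keys(self, vec):
        # one bit per hyperplane: which side of the plane the vector lies on
        return [tuple((planes @ vec > 0).astype(int)) for planes in self.planes]

    def add_topic(self, topic_id, vec):
        for table, key in zip(self.tables, self._keys(vec)):
            table[key].append((topic_id, vec))

    def track(self, doc_vec, similarity, threshold):
        # the union of the L buckets the document hashes into is the candidate set
        candidates = {}
        for table, key in zip(self.tables, self._keys(doc_vec)):
            for topic_id, topic_vec in table.get(key, []):
                candidates[topic_id] = topic_vec
        # exact comparison over the candidate set lowers the false positive rate
        return [t for t, v in candidates.items() if similarity(doc_vec, v) > threshold]

Each item is hashed once per table, which is where the (nD + nT) * (k * L) hashing term of Equation 2 in Section 6 comes from; the remaining cost is the exact verification of the candidate set.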
Algorithm 3 Cluster based Tracking INPUT: INITIAL-CLUSTER-SET {c ϵ C} TOPIC-STREAM {T} DOCUMENT-STREAM {D} threshold for spawning a new cluster {thrspawn} threshold for adapting an existing cluster {thradapt} OUTPUT: relevant topic-document pairs {t, d} while document d in T, D do if d ϵ T then cmin = argminc{distances(d, c ϵ C)} if distance(d,cmin) > thrspawn then spawnNewCluster(d →C) else if distance(d,cmin) < thradapt then contribute,assign(cmin,d) else assign(cmin,d) else if d ϵ D then cmin = argminc{distances(d,c ϵ C)} candidateSet = {t ϵ cmin} for all topics t in candidateSet do if similarity(d,t) > threshold then emit relevant {t, d} While the literature provides a vast diversity of clustering methods for textual documents, our requirements regarding tracking streams of topics naturally reduce the selection to lightweight single-pass algorithms. Yang et al. (2012) provided evidence that in extreme settings simple approaches work well in terms of balancing effectiveness, efficiency and scalability. We identified ArteCM by Carullo et al. (2008), originally intended to cluster documents for the web, as suitable. Algorithm 3 outlines our approach for cluster based tracking. Given an initial set of 4 random centroids, we compare each arriving topic to all centroids. We associate the new topic with the cluster whenever it is close enough. Particularly close documents contribute to a cluster, allowing it to drift towards topic dense regions. If the document is distant to all existing clusters, we spawn a new cluster based on the document. Documents arriving from the document stream are exactly matched against all centroids to determine the k-closest clusters. Topics associated with those clusters are subsequently exhaustively compared with the document, yielding topic-document pairs considered to be relevant. Probing more than one cluster increases the probability of finding similar topics. This does not correlate with soft-clustering methods as multiple probing happens at querying time while topics are assigned under a hard clustering paradigm. 4.3 Algorithm Comparison Both the LSH- and the cluster-based tracking algorithm provide two parameters that are conceptually directly comparable to each other. The number of bits per hash key and the threshold for spawning new clusters directly determine the size of the candidate set by either varying the bucket size or the cluster radius. The size of the candidate set trades a gain in efficiency against a loss in effectiveness. Fewer topics in the candidate set heavily reduce the search space for the tracking process but increase the chance of missing a relevant topic. Bigger sets are more likely to cover relevant topics but require more computational effort during the exact comparison step. The proposed algorithms allow continuously adjusting the candidate set size between two extremes of having all topics in a single set and having a separate set for each topic. The second parameter both algorithms have in common, is the number of probes to increase the probability of identifying similar topics. While LSH-based tracking offers the number of hash tables, cluster-based tracking provides the number of clusters probed. We again encounter a trade-off between gains in efficiency at the cost of effective1768 ness. Each additionally probed cluster or looked up table increases the chance of finding relevant topics as well as the computational effort. 5 Tracking Streams in Constant Space Operation in constant space is crucial when tracking topic streams. 
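A correspondingly minimal sketch of Algorithm 3 is given below. It is a simplified, assumption-laden variant of the ArteCM-style procedure rather than the authors' code: the centroid drift rule (a fixed 0.9/0.1 interpolation) and the use of cosine distance are illustrative choices, and the two thresholds (thr_spawn > thr_adapt) are left to the caller.

import numpy as np

class ClusterTracker:
    def __init__(self, dim, thr_spawn, thr_adapt, probes=3, seed=0):
        rng = np.random.default_rng(seed)
        self.centroids = [rng.normal(size=dim) for _ in range(4)]   # 4 initial random centroids
        self.members = [[] for _ in self.centroids]
        self.thr_spawn, self.thr_adapt, self.probes = thr_spawn, thr_adapt, probes

    @staticmethod
    def _dist(a, b):
        # cosine distance between dense vectors
        return 1.0 - (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9)

    def add_topic(self, topic_id, vec):
        d = [self._dist(vec, c) for c in self.centroids]
        closest = int(np.argmin(d))
        if d[closest] > self.thr_spawn:            # too far from every cluster: spawn a new one
            self.centroids.append(vec.copy())
            self.members.append([(topic_id, vec)])
            return
        if d[closest] < self.thr_adapt:            # very close: let the centroid drift towards it
            self.centroids[closest] = 0.9 * self.centroids[closest] + 0.1 * vec
        self.members[closest].append((topic_id, vec))

    def track(self, doc_vec, similarity, threshold):
        d = [self._dist(doc_vec, c) for c in self.centroids]
        probed = np.argsort(d)[: self.probes]      # probe the k closest clusters
        relevant = []
        for c in probed:
            for topic_id, topic_vec in self.members[c]:
                if similarity(doc_vec, topic_vec) > threshold:
                    relevant.append(topic_id)
        return relevant

At query time a document is compared only against the centroids and the members of the probed clusters, which corresponds to the (nD + nT) * c + DPcs cost of Equation 3 in Section 6.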
We ensure this by placing an upper limit on the number of concurrently tracked topics. Whenever the limit is reached, an active topic is deleted and subsequently not considered any longer. The strategy for selecting deletion candidates is heavily application dependant. To handle topic streams, LSH-based tracking replaces the entries of an active topic in its hash-tables by the values of the new topic, whenever the maximum number of topics is reached. Cluster-based tracking requires more adaptation because we allow clusters to drift conceptually. Whenever the maximum number of topics is reached, the contribution of the deletion candidate to its cluster is reverted and it is removed, freeing space for a new topic. 6 Experiments We evaluate the three algorithms in terms of effectiveness and efficiency. Starting out with tracking a small set of topics using the traditional approach, we evaluate various similarity metrics to ensure high effectiveness. We then conduct scaling experiments on massive streams in bounded and unbounded space. Corpora Traditional tracking datasets are unsuitable to approach tracking at scale as they consist of only a few thousand documents and several hundred topics (Allan, 2002). We created a new data set consisting of two streams (document and topic stream). The document stream consists of 52 million tweets gathered through Twitter’s streaming API 1. The tweets are order by their time-stamps. Since we are advocating a high volume topic stream, we require millions of topics. To ensure a high number of topics, we treat the entire English part (4.4 mio articles) of Wikipedia2 as a proxy for a collection of topics and turn it into a stream. Each article is considered to be an unstructured textual representation of a topic time-stamped by its latest verified update. 1http://stream.twitter.com 2http://en.wikipedia.org/wiki/Wikipedia database Relevance Judgements The topics we picked range from natural disasters, political and financial events to news about celebrities, as seen in table 3. We adopted the search-guided-annotation process used by NIST (Fiscus et al., 2002) and followed NIST’s TDT annotation guidelines. According to the definition of TDT, a document is relevant to a topic if it speaks about it (Allan, 2002). In total we identified 14,436 tweets as relevant to one of 30 topics. total number of topics 4.4 mio annotated topics 30 total number of documents 52 mio documents relevant to one of the 30 annotated topics 14.5k Table 1: Data set statistics Baseline We use an exact tracking system as a baseline. To speed up runtime, we implement an inverted index in conjunction with term-at-a-time query execution. Additionally, we provide a trade off between effectiveness and efficiency by randomly down sampling the Twitter stream. Note that this closely resembles previous approaches to scale topic tracking (Ghosh et al., 2013). Evaluation Metrics We evaluate effectiveness by recall and precision and combine them using F1 scores. Efficiency is evaluated using two different metrics. We provide a theoretical upper bound by computing the number of dot products required for tracking (Equations 1-4). 
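Since Equations 1-4 are pure counting arguments, they can be restated directly in code. The sketch below only re-expresses the formulas using the variable names of Table 2 and is not part of the evaluation pipeline; the example call reproduces the exhaustive baseline cost reported for the full data set.

def dp_candidate_set(n_D, n_C):
    # Eq. 4: exact comparisons between documents and their candidate topics
    return n_D * n_C

def dp_traditional(n_D, n_T):
    # Eq. 1: every document is compared against every topic
    return n_D * n_T

def dp_lsh_based(n_D, n_T, k, L, n_C):
    # Eq. 2: hashing cost (k bits in each of L tables) plus candidate-set verification
    return (n_D + n_T) * (k * L) + dp_candidate_set(n_D, n_C)

def dp_cluster_based(n_D, n_T, c, n_C):
    # Eq. 3: comparisons against c centroids plus candidate-set verification
    return (n_D + n_T) * c + dp_candidate_set(n_D, n_C)

# exhaustive baseline over the full data set: 52M tweets x 4.4M topics
print(f"{dp_traditional(52e6, 4.4e6):.1e}")   # ~2.3e+14 dot products, as in Table 5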
DPtraditional = nD ∗nT (1) DPLSH−based = (nD +nT )∗(k∗L)+DPcs (2) DPcluster−based = (nD + nT ) ∗c + DPcs (3) DPcs = nD ∗nC (4) Variables Definition nD total number of documents nT total number of topics k number of bits per hash L total number of hash tables c total number of clusters nC total number of topics in all candidate sets Table 2: Definition of variables for equation 1-4 1769 Topic-Title Topic description Number of relevant tweets Amy Winehouse Amy Winehouse dies 3265 Prince William William and Kate arrive in Canada 1021 Floods in Seoul Floods and landslides in North and South Korea 432 Flight 4896 Flight 4896 crashed 11 Bangladesh-India border Bangladesh and India sign a border pact 4 Goran Hadzic War criminal Goran Hadzic got arrested 2 Table 3: Showing 6 example topics plus a short summary of relevant tweets, as well as the number of relevant tweets per topic They therefore indicate performance without system- or implementation-dependent distortions. Equations 2 and 3 represent the cost to identify the candidate set for the LSH- and cluster-based algorithm plus the cost resulting from exhaustively comparing the candidate sets with the documents (Equation 4). Because we compute the dot products for a worst case scenario, we also provide the runtime in seconds. All run-times are averaged over 5 runs, measured on the same idle machine. To ensure fair comparison, all algorithms are implemented in C using the same libraries, compiler, compiler optimizations and run as a single process using 4 GB of memory. Because the runtime of the traditional approach (∼171 days) exceeds our limits, we estimate it based on extrapolating 50 runs using up to 25,000 topics. Note that this extrapolation favours the efficiency of the baseline system as it ignores hardware dependent slowdowns when scaling up the number of topics. 6.1 Exact tracking In our first experiment we track 30 annotated topics on 52 million tweets using the traditional approach. We compare various similarity measures (Table 4) and use the best-performing one in all following experiments. Our data set differs from the TREC and TDT corpora, which used news-wire articles. Allan et al. (2000) report that the cosine similarity constantly performed as the best distance function for TDT. The use of Wikipedia and Twitter causes a different set of similarity measures to perform best. This results from the imbalance in average document length between Wikipedia articles (590 terms) and tweets (11 terms). The term weights in short tweets (many only containing a single term) are inflated by the cosine’s length normalization. Those short tweets are however not uniquely linkable to target topics and consequently regarded as non-relevant by annotators, which explains the drop in performance. The similarity function chosen for all subsequent experiments is a BM25 weighted dot product, which we found to perform best. F1 score tf-idf weighted cosine 0.147 tf-idf weighted dot product 0.149 BM25 weighted cosine 0.208 BM25 weighted dot product 0.217 Table 4: Comparing the effectiveness of similarity measures when matching 30 Wikipedia articles against 52 million tweets 6.2 Tracking at scale, using Wikipedia and Twitter Previously, we conducted small scale experiments, now we are looking to scale them up, by tracking 4.4 million Wikipedia articles on 52 million tweets without limiting the number of topics tracked. The resulting trade-off between effectiveness and efficiency is shown in Figure 1 and 2. 
The right-most point corresponds to exhaustive comparison of every document against every topic – this results in highest possible effectiveness (F1 score) and highest computational cost. All runs use optimal tracking thresholds determined by sweeping them while optimizing on F1 score as an objective function. We also show the performance resulting from the traditional approach when randomly down-sampling the document (Twitter) stream, which resembles previous attempts to scale tracking (Ghosh et al., 2013). Every point on the LSH-based tracking curve in Figure 1 and 2 represents a different number of bits per hash key (varying between 4 and 20) and tables (ranging from 6 to 200). The points on the cluster-based tracking curves result from varying the number of clusters (ranging from 1 to 100,000) and probes. The resulting bucket sizes span from a few dozen to over a million topics. As expected, the graphs in Figure 1 closely resembles those in Figure 2. The two figures also show that the performance of all three algorithms is continuously adjustable. Unsurprisingly, LSHand cluster-based tracking clearly outperform 1770 Figure 1: Trade-off between efficiency and dot-products for LSH- and cluster-based tracking as well as a random downsampling approach for traditional tracking Figure 2: Trade-off between efficiency and runtime for LSH- and cluster-based tracking as well as a random downsampling approach for traditional tracking; random document sampling for the traditional approach, based on their more effective search space reduction strategies. More surprisingly, we also observe that cluster-based tracking outperforms tracking based on LSH in terms of efficiency for F1 scores between 10% and 20%. To understand why tracking based on clustering is faster than randomized tracking, we further investigate their abilities in cutting down the search space. Figure 3 presents the candidate set size necessary to find a certain ratio of relevant topics. The graph also illustrates the impact of probing multiple clusters. When focusing on a recall up to 60%, LSH-based tracking requires a significantly larger candidate set size in comparison with tracking through clustering. For example, LSH-based tracking needs to examine 30% of all topics to reach a recall of 50%, while the cluster based approach only needs to look at 9%. This effect diminishes for higher recall values. Furthermore, we observe an impressive performance gain in recall from 20% to 60%, resulting from additionally probing the k-closest clusters instead Figure 3: Comparing the candidate set size with the Recall of LSH- with cluster-based tracking without the exact evaluation phase; The magnitude of the candidate set size represents the ratio between the number of candidate topics and the total number of topics; of just the closest one. While data dependent segmentation is expected to outperform LSH in terms of effectiveness, we were surprised by the magnitude of its impact on efficiency. The lack in effectiveness of LSH has a direct negative implication on its efficiency for tracking. In order to make up for its suboptimal space segmentation, it requires substantially bigger candidate sets to reach the same level of recall as the cluster-based approach. The size of the candidate set is critical because we assume a subsequent exact comparison phase to lower the false positive rate. The overhead of both algorithms is outweighed by the cost of exact comparison for the candidate set. 
Table 5, which compares the performance of the three algorithms, reveals a drastic reduction in runtime of up to 80%, at the cost of only a minor decrease in F1 score. The differences of 6% and 10% in F1 score are not statistically significant according to a sign test (p<=0.362 and p<=0.2). Consequently, both algorithms achieve a substantial runtime reduction while maintaining a level of effectiveness that is statistically indistinguishable from the traditional (exact) approach.

Algorithm                F1 score       Dot products          Runtime (sec)
traditional approach     0.217          2.3 * 10^14           1.5 * 10^7
LSH-based tracking       0.196 (-10%)   1.4 * 10^14 (-39%)    8.0 * 10^6 (-46%)
cluster-based tracking   0.204 (-6%)    3.1 * 10^13 (-86%)    2.5 * 10^6 (-83%)
Table 5: Effectiveness and efficiency of LSH- and cluster-based tracking compared to the traditional approach

Algorithm                Space       F1 score       Dot products          Runtime (sec)
LSH-based tracking       unbounded   0.196          1.4 * 10^14           8.0 * 10^6
                         bounded     0.173 (-12%)   5.1 * 10^11 (-99%)    4.1 * 10^4 (-99%)
cluster-based tracking   unbounded   0.204          3.1 * 10^13           2.5 * 10^6
                         bounded     0.189 (-7%)    1.8 * 10^11 (-99%)    3.3 * 10^4 (-98%)
Table 6: Effectiveness and efficiency for tracking in bounded and unbounded space

6.3 Tracking Wikipedia on Twitter in constant space
Tracking a stream of topics in bounded space is highly application-specific due to the deletion procedure. We know from previous studies (Nichols et al., 2012) that a topic's popularity within Twitter fades away over time. We are interested in keeping currently active topics and deleting those that attract the least number of recent documents. This set-up has the interesting property that the document stream dictates the lifespan of each topic in the topic stream. Table 6 contains the results of cluster- and LSH-based tracking and compares them to their bounded versions using the same set-up. Note that the hit in performance is solely determined by the amount of memory provided and is therefore continuously adjustable. For this particular experiment, we chose an upper bound of 25k concurrent topics. The table shows a substantial drop in runtime, following the reduced search space, at a fairly low expense in effectiveness. Based on our observations, we hypothesise that significant topics are more likely to be discussed during random Twitter chatter than the average Wikipedia topic. It is interesting to note that the runtime also indicates a lower overhead for LSH-based tracking in comparison with the cluster-based approach. This difference was hidden in the unbounded tracking experiments but now carries more weight.

7 Conclusion
We extended traditional topic tracking by demonstrating that it is possible to track an unbounded stream of topics in constant space and time. We also presented two approaches to tracking, based on LSH and clustering, that efficiently scale to a high number of topics and documents while maintaining a level of effectiveness that is statistically indistinguishable from an exact tracking system. While both trade gains in efficiency against a loss in effectiveness, we showed that cluster-based tracking does so more efficiently due to its more effective space segmentation, which allows a greater reduction of the search space. Contrary to common belief, this shows that nearest-neighbour search in data streams based on clustering can perform faster than LSH for the same level of accuracy. Furthermore, we showed that standard measures of similarity (cosine) are sub-optimal when tracking Wikipedia against Twitter.
References James Allan, Victor Lavrenko, Daniella Malin, and Russell Swan. 2000. Detections, bounds, and timelines: Umass and tdt-3. In Proceedings of Topic Detection and Tracking Workshop, pages 167-174. James Allan, Ron Papka, and Victor Lavrenko. 1998. On-line new event detection and tracking. In Proceedings of the 21st annual international ACM SIGIR conference on Research and development in information retrieval (SIGIR ’98). ACM, New York, NY, USA. James Allan. 2002. Topic Detection and Tracking: Event-Based Information Organization. Kluwer Academic Publishers, Norwell, MA, USA. Mario Cataldi, Luigi Di Caro, and Claudio Schifanella. 2010. Emerging topic detection on Twitter based on temporal and social terms evaluation. In Proceedings of the Tenth International Workshop on Multimedia Data Mining, pages 1-10. ACM. H. Becker, M. Naaman, and L. Gravano. 2009. Event Identification in Social Media. In 12th International Workshop on the Web and Databases (WebDB’09), Providence, USA. Moreno Carullo, Elisabetta Binaghi, Ignazio Gallo and Nicola Lamberti. 2008. ”Clustering of short commercial documents for the web.” Paper presented at the meeting of the ICPR. Moses S. Charikar. 2002. Similarity estimation techniques from rounding algorithms. In Proceedings of the thirty-fourth annual ACM symposium on Theory of computing (STOC ’02). ACM, New York, NY, USA. Eichmann, D. and P. Sirivasan. 1999. ”Filters, Webs and Answers: The University of Iowa TREC-8 Results” Eighth Conference on Text Retrieval, NIST, USA. 1772 Fiscus, J. G. and Doddington, G. R. 2002. Topic detection and tracking evaluation overview. Topic detection and tracking: event-based information organization, pages 17-31. Saptarshi Ghosh, Muhammad Bilal Zafar, Parantapa Bhattacharya, Naveen Sharma, Niloy Ganguly, and Krishna Gummadi. 2013. On sampling the wisdom of crowds: random vs. expert sampling of the twitter stream. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management (CIKM-13). New York, NY, USA. Aristides Gionis, Piotr Indyk, and Rajeev Motwani. 1999. Similarity Search in High Dimensions via Hashing. InProceedings of the 25th International Conference on Very Large Data Bases (VLDB ’99), San Francisco, CA, USA. Sayyadi Hassan, Hurst Matthew and Maykov Alexey. 2009. ”Event Detection and Tracking in Social Streams.” In Proceedings of the ICWSM, CA, USA. Yihong Hong, Yue Fei, and Jianwu Yang. 2013. Exploiting topic tracking in real-time tweet streams. In Proceedings of the 2013 international workshop on Mining unstructured big data using natural language processing. ACM, New York, NY, USA. Piotr Indyk and Rajeev Motwani. 1998. Approximate nearest neighbours: towards removing the curse of dimensionality. In Proceedings of the thirtieth annual ACM symposium on Theory of computing (STOC ’98). ACM, New York, NY, USA. Jimmy Lin, Rion Snow, and William Morgan. 2011. Smoothing techniques for adaptive online language models: topic tracking in tweet streams. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’11). ACM, New York, NY, USA, 422-429. S. Muthukrishnan. 2005. Data streams: Algorithms and applications. Now Publishers Inc. Jeffrey Nichols, Jalal Mahmud, and Clemens Drews. 2012. Summarizing sporting events using twitter. InProceedings of the 2012 ACM international conference on Intelligent User Interfaces (IUI ’12). ACM, New York, NY, USA. Sasa Petrovic, Miles Osborne, and Victor Lavrenko. 2010. 
Streaming first story detection with application to Twitter. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics (HLT ’10). Association for Computational Linguistics, Stroudsburg, PA, USA. Sasa Petrovic. 2013. Real-time event detection in massive streams. Ph.D. thesis, School of Informatics, University of Edinburgh. Deepak Ravichandran, Patrick Pantel, and Eduard Hovy. 2005. Randomized Algorithms and NLP: Using Locality Sensitive Hash Functions for High Speed Noun Clustering. In Proceedings of ACL. Raymond K. Pon, Alfonso F. Cardenas, David Buttler, and Terence Critchlow. 2007. Tracking multiple topics for finding interesting articles. In Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’07). ACM, New York, NY, USA. I. Soboroff, I. Ounis, and J. Lin. 2012. Overview of the trec-2012 microblog track. In Proceedings of TREC. Jintao Tang, Ting Wang, Qin Lu, Ji Wang, and Wenjie Li. 2011. A Wikipedia based semantic graph model for topic tracking in blogosphere. In Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume Three (IJCAI’11). TDT by NIST 1998-2004. http://www.itl.nist.gov/iad/mig/ tests/tdt/resources.html (Last Update: 2008) Jianshu Weng, Erwin Leonardi, Francis Lee. Event Detection in Twitter. 2011. In Proceeding of ICWSM. AAAI Press. Xintian Yang, Amol Ghoting, Yiye Ruan, and Srinivasan Parthasarathy. 2012. A framework for summarizing and analysing twitter feeds. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining (KDD ’12). ACM, New York, NY, USA. 1773
2015
170
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1774–1782, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Inducing Word and Part-of-Speech with Pitman-Yor Hidden Semi-Markov Models Kei Uchiumi Hiroshi Tsukahara Denso IT Laboratory, Inc. Shibuya Cross Tower 28F 2-15-1 Shibuya, Tokyo, Japan {kuchiumi,htsukahara}@d-itlab.co.jp Daichi Mochihashi The Institute of Statistical Mathematics 10-3 Midori-cho, Tachikawa city Tokyo, Japan [email protected] Abstract We propose a nonparametric Bayesian model for joint unsupervised word segmentation and part-of-speech tagging from raw strings. Extending a previous model for word segmentation, our model is called a Pitman-Yor Hidden SemiMarkov Model (PYHSMM) and considered as a method to build a class n-gram language model directly from strings, while integrating character and word level information. Experimental results on standard datasets on Japanese, Chinese and Thai revealed it outperforms previous results to yield the state-of-the-art accuracies. This model will also serve to analyze a structure of a language whose words are not identified a priori. 1 Introduction Morphological analysis is a staple of natural language processing for broad languages. Especially for some East Asian languages such as Japanese, Chinese or Thai, word boundaries are not explicitly written, thus morphological analysis is a crucial first step for further processing. Note that also in Latin and old English, scripts were originally written with no word indications (scripta continua), but people felt no difficulty reading them. Here, morphological analysis means word segmentation and part-of-speech (POS) tagging. For this purpose, supervised methods have often been employed for training. However, to train such supervised classifiers, we have to prepare a large amount of training data with correct annotations, in this case, word segmentation and POS tags. Creating and maintaining these data is not only costly but also very difficult, because generally there are no clear criteria for either “correct” segmentation or POS tags. In fact, since there are different standards for Chinese word segmentation, widely used SIGHAN Bakeoff dataset (Emerson, 2005) consists of multiple parts employing different annotation schemes. Lately, this situation has become increasingly important because there are strong demands for processing huge amounts of text in consumer generated media such as Twitter, Weibo or Facebook (Figure 1). They contain a plethora of colloquial expressions and newly coined words, including sentiment expressions such as emoticons that cannot be covered by fixed supervised data. To automatically recognize such linguistic phenomena beyond small “correct” supervised data, we have to extract linguistic knowledge from the statistics of strings themselves in an unsupervised fashion. Needless to say, such methods will also contribute to analyzing speech transcripts, classic texts, or even unknown languages. From a scientific point of view, it is worth while to find “words” and their part-of-speech purely from a collection of strings without any preconceived assumptions. To achieve that goal, there have been two kinds of approaches: heuristic methods and statistical generative models. 
Heuristic methods are based on basic observations such that word boundaries will often occur at the place where predictive entropy of characters is large (i.e. the next character cannot be predicted without assuming ローラのときに涙かブハァってなりました∩(´;ヮ; `) ∩~~ 真樹なんてこんな中2くさい事胸張って言えるぞぉ! 今日ね!らんらんとるいとコラボキャスするからお いで~(*´∀`) ノシ どうせ明日の昼ごろしれっと不在表入ってるんだろ うなぁ。 テレ東はいつものネトウヨホルホルVTR 鑑賞番組 してんのか Figure 1: Sample of Japanese Twitter text that is difficult to analyze by ordinary supervised segmentation. It contains a lot of novel words, emoticons, and colloquial expressions. 1774 the next word). By formulating such ideas as search or MDL problems of given coding length1, word boundaries are found in an algorithmic fashion (Zhikov et al., 2010; Magistry and Sagot, 2013). However, such methods have difficulty incorporating higher-order statistics beyond simple heuristics, such as word transitions, word spelling formation, or word length distribution. Moreover, they usually depends on tuning parameters like thresholds that cannot be learned without human intervention. In contrast, statistical models are ready to incorporate all such phenomena within a consistent statistical generative model of a string, and often prove to work better than heuristic methods (Goldwater et al., 2006; Mochihashi et al., 2009). In fact, the statistical methods often include the criteria of heuristic methods at least in a conceptual level, which is noted in (Mochihashi et al., 2009) and also explained later in this paper. In a statistical model, each word segmentation w of a string s is regarded as a hidden stochastic variable, and the unsupervised learning of word segmentation is formulated as a maximization of a probability of w given s: argmax w p(w|s) . (1) This means that we want the most “natural” segmentation w that have a high probability in a language model p(w|s). Lately, Chen et al. (2014) proposed an intermediate model between heuristic and statistical models as a product of character and word HMMs. However, these two models do not have information shared between the models, which is not the case with generative models. So far, these approaches only find word segmentation, leaving part-of-speech information behind. These two problems are not actually independent but interrelated, because knowing the part-of-speech of some infrequent or unknown word will give contextual clues to word segmentation, and vice versa. For example, in Japanese すもももももも can be segmented into not only すもも/も/もも/も (plum/too/peach/too), but also into すもも/もも/ もも(plum/peach/peach), which is ungrammatical. However, we could exclude the latter case 1For example, Zhikov et al. (2010) defined a coding length using character n-grams plus MDL penalty. Since this can be interpreted as a crude “likelihood” and a prior, its essence is similar but driven by a quite simplistic model. Character HPYLM Word HPYLM Figure 2: NPYLM represented in a hierarchical Chinese restaurant process. Here, a character ∞gram HPYLM is embedded in a word n-gram HPYLM and learned jointly during inference. if we leverage knowledge that a state sequence N/P/N/P is much more plausible in Japanese than N/N/N from the part-of-speech information. Sirts and Alum¨ae (2012) treats a similar problem of POS induction with unsupervised morphological segmentation, but they know the words in advance and only consider segmentation within a word. 
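For concreteness, a toy sketch of the branching-entropy heuristic mentioned at the beginning of this section might look as follows; the context length and threshold are arbitrary illustrative values, and real systems such as Zhikov et al. (2010) combine this signal with an MDL objective rather than a fixed cutoff.

# Minimal sketch of entropy-based boundary detection (not the proposed model).
import math
from collections import defaultdict

def successor_entropy(corpus, n=2):
    # Count which characters follow each n-character context.
    follow = defaultdict(lambda: defaultdict(int))
    for sent in corpus:
        for i in range(n, len(sent)):
            follow[sent[i - n:i]][sent[i]] += 1
    entropy = {}
    for ctx, counts in follow.items():
        total = sum(counts.values())
        entropy[ctx] = -sum(c / total * math.log(c / total) for c in counts.values())
    return entropy

def boundary_candidates(sent, entropy, n=2, threshold=1.0):
    # Propose a word boundary before position i when the predictive entropy
    # of the next character given the preceding context is large.
    return [i for i in range(n, len(sent))
            if entropy.get(sent[i - n:i], 0.0) > threshold]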
For this objective, we attempt to maximize the joint probability of words and tags: argmax w,z p(w, z|s) ∝p(w, z, s) (2) From the expression above, this amounts to building a generative model of a string s with words w and tags z along with an associated inference procedure. We solve this problem by extending previous generative model of word segmentation. Note that heuristic methods are never able to model the hidden tags, and only statistical generative models can accommodate this objective. This paper is organized as follows. In Section 2, we briefly introduce NPYLM (Mochihashi et al., 2009) on which our extension is based. Section 3 extends it to include hidden states to yield a hidden semi-Markov models (Murphy, 2002), and we describe its inference procedure in Section 4. We conduct experiments on some East Asian languages in Section 5. Section 6 discusses implications of our model and related work, and Section 7 concludes the paper. 2 Nested Pitman-Yor Language Model Our joint model of words and states is an extension of the Nested Pitman-Yor Language Model (Mochihashi et al., 2009) of a string, which in turn is an extension of a Bayesian n-gram language model called Hierarchical Pitman-Yor Language Model (HPYLM) (Teh, 2006). 1775 HPYLM is a nonparametric Bayesian model of n-gram distribution based on the Pitman-Yor process (Pitman and Yor, 1997) that generates a discrete distribution G as G ∼PY(G0, d, θ). Here, d is a discount factor, “parent” distribution G0 is called a base measure and θ controls how similar G is to G0 in expectation. In HPYLM, n-gram distribution Gn = {p(wt|wt−1 · · · wt−(n−1))} is assumed to be generated from the Pitman-Yor process Gn ∼PY(Gn−1, dn, θn) , (3) where the base measure Gn−1 is an (n−1)-gram distribution generated recursively in accordance with (3). Note that there are different Gn for each n-gram history h = wt−1 · · · wt−(n−1). When we reach the unigram G1 and need to use a base measure G0, i.e. prior probabilities of words, HPYLM usually uses a uniform distribution over the lexicon. However, in the case of unsupervised word segmentation, every sequence of characters could be a word, thus the size of the lexicon is unbounded. Moreover, prior probability of forming a word should not be uniform over all sequences of characters: for example, English words rarely begin with ‘gme’ but tend to end with ’-ent’ like in segment. To model this property, NPYLM assumes that word prior G0 is generated from character HPYLM to model a well-formedness of w. In practice, to avoid dependency on n in the character model, we used an ∞-gram VPYLM (Mochihashi and Sumita, 2008) in this research. Finally, NPYLM gives an n-gram probability of word w given a history h recursively by integrating out Gn, p(w|h) = c(w|h)−d·thw θ+c(h) + θ+d·th · θ+c(h) p(w|h′) , (4) where h′ is the shorter history of (n−1)-grams. c(w|h), c(h) = ∑ w c(w|h) are n-gram counts of w appearing after h, and thw, th · = ∑ w thw are associated latent variables explained below. In case the history h is already empty at the unigram, p(w|h′) = p0(w) is computed from the character ∞-grams for the word w=c1 · · · ck : p0(w) = p(c1 · · · ck) (5) = ∏k i=1 p(ci|ci−1 · · · c1) . (6) In practice, we further corrected (6) so that a word length follows a mixture of Poisson distributions. For details, see (Mochihashi et al., 2009). When we know word segmentation w of the data, the probability above can be computed by adding each n-gram count of w given h to the model, i.e. 
increment c(w|h) in accordance with a hierarchical Chinese restaurant process associated with HPYLM (Figure 2). When each n-gram count called a customer is inferred to be actually generated from (n−1)-grams, we send its proxy customer for smoothing to the parent restaurant and increment thw, and this process will recurse. Notice that if a word w is never seen in w, its proxy customer is eventually sent to the parent restaurant of unigrams. In that case2, w is decomposed to its character sequence c1 · · · ck and this is added to the character HPYLM in the same way, making it a little “clever” about possible word spellings. Inference Because we do not know word segmentation w beforehand, we begin with a trivial segmentation in which every sentence is a single word3. Then, we iteratively refine it by sampling a new word segmentation w(s) of a sentence s in a Markov Chain Monte Carlo (MCMC) framework using a dynamic programming, as is done with PCFG by (Johnson et al., 2007) shown in Figure 3 where we omit MH steps for computational reasons. Further note that every hyperparameter dn, θn of NPYLM can be sampled from the posterior in a Bayesian fashion, as opposed to heuristic methods that rely on a development set for tuning. For details, see Teh (2006). 3 Pitman-Yor Hidden Semi-Markov Models NPYLM is a complete generative model of a string, that is, a hierarchical Bayesian n-gram lanInput: a collection of strings S Add initial segmentation w(s) to Θ for j = 1 · · · J do for s in randperm (S) do Remove customers of w(s) from Θ Sample w(s) according to p(w|s, Θ) Add customers of w(s) to Θ end for Sample hyperparameters of Θ end for Figure 3: MCMC inference of NPYLM Θ. 2To be precise, this occurs whenever thw is incremented in the unigram restaurant. 3Note that a child first memorizes what his mother says as a single word and gradually learns the lexicon. 1776 zt−1 zt zt+1 wt−1 wt wt+1 | {z } Observation s · · · · · · · · · · · · Figure 4: Graphical model of PYHSMM in a bigram case. White nodes are latent variables, and the shaded node is the observation. We only observe a string s that is a concatenation of hidden words w1 · · · wT . guage model combining words and characters. It can also be viewed as a way to build a Bayesian word n-gram language model directly from a sequence of characters, without knowing “words” a priori. One possible drawback of it is a lack of part-ofspeech: as described in the introduction, grammatical states will contribute much to word segmentation. Also, from a computational linguistics point of view, it is desirable to induce not only words from strings but also their part-of-speech purely from the usage statistics (imagine applying it to an unknown language or colloquial expressions). In classical terms, it amounts to building a class ngram language model where both class and words are unknown to us. Is this really possible? Yes, we can say it is possible. The idea is simple: we augment the latent states to include a hidden part-of-speech zt for each word wt, which is again unknown as displayed in Figure 4. Assuming wt is generated from zt’-th NPYLM, we can draw a generative model of a string s as follows: z0 =BOS; s=ϵ (an empty string). for t = 1 · · · T do Draw zt ∼p(zt|zt−1) , Draw wt ∼p(wt|w1 · · · wt−1, zt) , Append wt to s . end for Here, z0 = BOS and zT+1 = EOS are distinguished states for beginning and end of a sentence, respectively. 
For the transition probability of hidden states, we put a HPY process prior as (Blunsom and Cohn, 2011): p(zt|zt−1) ∼HPY(d, θ) (7) with the final base measure being a uniform distribution over the states. The word boundaries are !"#! !!"#$"# %&'()*+,-.&-)/01-0.2*3'45# ! !! "! #! $! "! !!! !"! "#! #$! $"! $%&'()*+,-.( (((/! 0"#(1+'*2( (((((((3! #$"! "#$! !"#! !!"! 4"#! 516*(-! Figure 5: Graphical representation of sampling words and POSs. Each cell corresponds to an inside probability α[t][k][z]. Note each cell is not always connected to adjacent cells, because of an overlap of substrings associated with each cell. known in (Blunsom and Cohn, 2011), but in our case it is also learned from data at the same time. Note that because wt depends on already generated words w1 · · · wt−1, our model is considered as an autoregressive HMM rather than a vanilla HMM, as shown in Figure 4 (wt−1 →wt dependency). Since segment models like NPYLM have segment lengths as hidden states, they are called semiMarkov models (Murphy, 2002). In contrast, our model also has hidden part-of-speech, thus we call it a Pitman-Yor Hidden Semi-Markov model (PYHSMM).4 Note that this is considered as a generative counterpart of a discriminative model known as a hidden semi-Markov CRF (Sarawagi and Cohen, 2005). 4 Inference Inference of PYHSMM proceeds in almost the same way as NPYLM in Figure 3: For each sentence, first remove the customers associated with the old segmentation similarly to adding them. After sampling a new segmentation and states, the model is updated by adding new customers in accordance with the new segmentation and hidden states. 4.1 Sampling words and states To sample words and states (part-of-speech) jointly, we first compute inside probabilities forward from BOS to EOS and sample backwards from EOS according to the Forward filteringBackward sampling algorithm (Scott, 2002). This 4Lately, Johnson et al. (2013) proposed a nonparametric Bayesian hidden semi-Markov models for general state spaces. However, it depends on a separate distribution for a state duration, thus is clealy different from ours for a natural language. 1777 can be regarded as a “stochastic Viterbi” algorithm that has the advantage of not being trapped in local minima, since it is a valid move of a Gibbs sampler in a Bayesian model. For a word bigram case for simplicity, inside variable α[t][k][z] is a probability that a substring c1 · · · ct of a string s = c1 · · · cN is generated with its last k characters being a word, generated from state z as shown in Figure 5. From the definition of PYHSMM, this can be computed recursively as follows: α[t][k][z] = L ∑ j=1 K ∑ y=1 p(ct t−k|ct−k t−k−j+1, z) p(z|y)α[t−k][j][y] . (8) Here, ct s is a substring cs · · · ct and L (≤t) is the maximum length of a word, and K is the number of hidden states.5 In Figure 5, each cell represents α[t][k][z] and a single path connecting from EOS to BOS corresponds to a word sequence w and its state sequence z. Note that each cell is not always connected to adjacent cells (we omit the arrows), because the length-k substring associated with each cell already subsumes that of neighborhood cells. Once w and z are sampled, each wt is added to zt’-th NPYLM to update its statistics. 4.2 Efficient computation by the Negative Binomial generalized linear model Inference algorithm of PYHSMM has a computational complexity of O(K2L2N), where N is a length of the string to analyze. 
To reduce computations it is effective to put a small L of maximum word length, but it might also ignore occasionally long words. Since these long words are often predictable from some character level information including suffixes or character types, in a Type Feature ci Character at time t−i (0≤i≤1) ti Character type at time t−i (0≤i≤4) cont # of the same character types before t ch # of times character types changed within 8 characters before t Table 1: Features used for the Negative Binomial generalized linear model for maximum word length prediction. 5For computational reasons, we do not pursue using a Dirichlet process to yield an infinite HMM (Van Gael et al., 2009), but it is straightforward to extend our PYHSMM to iHMM. semi-supervised setting we employ a Negative Binomial generalized linear model (GLM) for setting Lt adaptively for each character position t in the corpus. Specifically, we model the word length ℓby a Negative Binomial distribution (Cook, 2009): ℓ∼NB(ℓ|r, p) = Γ(r+ℓ) Γ(r) ℓ! pℓ(1 −p)r . (9) This counts the number of failures of Bernoulli draws with probability (1−p) before r’th success. For our model, note that Negative Binomial is obtained from a Poisson distribution Po(λ) whose parameter λ again follows a Gamma distribution Ga(r, b) and integrated out: p(ℓ|r, b) = ∫ Po(ℓ|λ)Ga(λ|r, b)dλ (10) = Γ(r+ℓ) Γ(r) ℓ! ( b 1+b )ℓ( 1 1+b )r . (11) This construction exactly mirrors the PoissonGamma word length distribution in (Mochihashi et al., 2009) with sampled λ. Therefore, our Negative Binomial is basically a continuous analogue of the word length distribution in NPYLM.6 Since r > 0 and 0 ≤p ≤1, we employ an exponential and sigmoidal linear regression r = exp(wT r f), p = σ(wT p f) (12) where σ(x) is a sigmoid function and wr, wp are weight vectors to learn. f is a feature vector computed from the substring c1 · · · ct, including f0 ≡1 for a bias term. Table 1 shows the features we used for this Negative Binomial GLM. Since Negative Binomial GLM is not convex in wr and wp, we endow a Normal prior N(0, σ2I) for them and used a random walk MCMC for inference. Predicting Lt Once the model is obtained, we can set Lt adaptively as the time where the cumulative probability of ℓexceeds some threshold θ (we used θ = 0.99). Table 2 shows the precision of predicting maximum word length learned from 10,000 sentences from each set: it measures whether the correct word boundary in test data is included in the predicted Lt. Overall it performs very well with high precision, and works better for longer words that cannot be accommodated with a fixed maximum length. 6Because NPYLM employs a mixture of Poisson distributions for each character type of a substring, this correspondence is not exact. 1778 Lang Dataset Training Test Ja Kyoto corpus 37,400 1,000 BCCWJ OC 20,000 1,000 Zh SIGHAN MSR 86,924 3,985 SIGHAN CITYU 53,019 1,492 SIGHAN PKU 19,056 1,945 Th InterBEST Novel 1,000 1,000 Table 3: Datasets used for evaluation. Abbreviations: Ja=Japanese, Zh=Chinese, Th=Thai language. Figure 6 shows the distribution of predicted maximum lengths for Japanese. Although we used θ = 0.99, it is rather parsimonious but accurate that makes the computation faster. Because this cumulative Negative Binomial prediction is language independent, we believe it might be beneficial for other natural language processing tasks that require some maximum lengths within which to process the data. 
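Inference in Section 4.1 is driven by the inside recursion of Equation 8. The following is a simplified sketch of that forward pass, assuming hypothetical interfaces word_prob(word, prev_word, state) and trans_prob(state, prev_state) for the PYHSMM's word and transition models; BOS is represented simply as None, and backward sampling as well as the adaptive maximum length of Section 4.2 are omitted.

# Simplified forward pass of Eq. (8): alpha[t][k][z] is the probability that
# the prefix c_1..c_t ends in a word of length k generated from state z.
def forward(chars, K, L, word_prob, trans_prob):
    n = len(chars)
    alpha = [[[0.0] * K for _ in range(L + 1)] for _ in range(n + 1)]
    for t in range(1, n + 1):
        for k in range(1, min(L, t) + 1):
            word = chars[t - k:t]
            for z in range(K):
                if t - k == 0:
                    # The word starts the sentence: condition on BOS (None).
                    alpha[t][k][z] = word_prob(word, None, z) * trans_prob(z, None)
                    continue
                total = 0.0
                for j in range(1, min(L, t - k) + 1):
                    prev = chars[t - k - j:t - k]
                    for y in range(K):
                        total += (word_prob(word, prev, z)
                                  * trans_prob(z, y)
                                  * alpha[t - k][j][y])
                alpha[t][k][z] = total
    return alpha

The nested loops make the O(K^2 L^2 N) complexity stated above explicit, which is why bounding L (adaptively, via the Negative Binomial GLM) matters in practice.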
5 Experiments To validate our model, we conducted experiments on several corpora of East Asian languages with no word boundaries. Datasets For East Asian languages, we used standard datasets in Japanese, Chinese and Thai as shown in Table 3. The Kyoto corpus is a collection of sentences from Japanese newspaper (Kurohashi and Nagao, 1998) with both word segmentation and part-of-speech annotations. BCCWJ (Balanced Corpus of Contemporary Written Japanese) is a balanced corpus of written Japanese (Maekawa, 2007) from the National Institute of Japanese Language and Linguistics, also with both word segmentation and part-ofspeech annotations from slightly different criteria. For experiments on colloquial texts, we used a random subset of “OC” register from this corpus that is comprised of Yahoo!Japan Answers from users. For Chinese, experiments are conducted on standard datasets of SIGHAN Bakeoff 2005 (Emerson, 2005); for comparison we used MSR and PKU datasets for simplified Chinese, and the CITYU dataset for traditional Chinese. SIGHAN datasets have word boundaries only, and we conformed to original training/test splits provided with the data. InterBEST is a dataset in Thai used in the InterBEST 2009 word segmentation contest (Kosawat, 2009). For contrastive purposes, we used a “Novel” subset of it with a random sampling without replacement for training and test data. Accuracies are measured in token F-measures computed as follows: F = 2PR P +R , (13) P = # of correct words # of words in output , (14) R = # of correct words # of words in gold standard . (15) Unsupervised word segmentation In Table 4, we show the accuracies of unsupervised word segmentation with previous figures. We used bigram PYHSMM and set L = 4 for Chinese, L = 5, 8, 10, 21 for Japanese with different types of contiguous characters, and L = 6 for Thai. The number of hidden states are K = 10 (Chinese and Thai), K =20 (Kyoto) and K =30 (BCCWJ). We can see that our PYHSMM outperforms on all the datasets. Huang and Zhao (2007) reports that the maximum possible accuracy in unsupervised Chinese word segmentation is 84.8%, derived through the inconsistency between different segmentation standards of the SIGHAN dataset. Our PYHSMM performs nearer to this best possible accuracy, leveraging both word and character knowledge in a consistent Bayesian fashion. Further note that in Thai, quite high performance is achieved with a very small data compared to previous work. Unsupervised part-of-speech induction As stated above, Kyoto, BCCWJ and Weibo datasets Dataset Kyoto BCCWJ MSR CITYU BEST Precision (All) 99.9 99.9 99.6 99.9 99.0 Precision (≥5) 96.7 98.4 73.6 87.0 91.7 Maximum length 15 48 23 12 21 Table 2: Precision of maximum word length prediction with a Negative Binomial generalized linear model (in percent). ≥5 are figures for word length ≥5. Final row is the maximum length of a word found in each dataset. 0 2000 4000 6000 8000 10000 12000 14000 2 4 6 8 10 12 14 16 Frequency L Figure 6: Distribution of predicted maximum word lengths on the Kyoto corpus. 1779 Dataset PYHSMM NPY BE HMM2 Kyoto 71.5 62.1 71.3 NA BCCWJ 70.5 NA NA NA MSR 82.9 80.2 78.2 81.7 CITYU 82.6∗ 82.4 78.7 NA PKU 81.6 NA 80.8 81.1 BEST 82.1 NA 82.1 NA Table 4: Accuracies of unsupervised word segmentation. BE is a Branching Entropy method of Zhikov et al. (2010), and HMM2 is a product of word and character HMMs of Chen et al. (2014). ∗is the accuracy decoded with L = 3: it becomes 81.7 with L=4 as MSR and PKU. have part-of-speech annotations as well. 
For these data, we also evaluated the precision of part-ofspeech induction on the output of unsupervised word segmentation above. Note that the precision is measured only over correct word segmentation that the system has output. Table 5 shows the precisions; to the best of our knowledge, there are no previous work on joint unsupervised learning of words and tags, thus we only compared with Bayesian HMM (Goldwater and Griffiths, 2007) on both NPYLM segmentation and gold segmentation. In this evaluation, we associated each tag of supervised data with a latent state that cooccurred most frequently with that tag. We can see that the precision of joint POS tagging is better than NPYLM+HMM, and even better than HMM that is run over the gold segmentation. For colloquial Chinese, we also conducted an experiment on the Leiden Weibo Corpus (LWC), a corpus of Chinese equivalent of Twitter7. We used random 20,000 sentences from this corpus, and results are shown in Figure 7. In many cases plausible words are found, and assigned to syntactically consistent states. States that are not shown here are either just not used or consists of a mixture of different syntactic categories. Guiding our model to induce more accurate latent states is a common problem to all unsupervised part-of-speech induction, but we show some semi-supervised results next. Dataset PYHSMM NPY+HMM HMM Kyoto 57.4 53.8 49.5 BCCWJ 50.2 44.1 44.2 LWC 33.0 30.9 32.9 Table 5: Precision of POS tagging on correctly segmented words. 7http://lwc.daanvanesch.nl/ Semi-supervised experiments Because our PYHSMM is a generative model, it is easily amenable to semi-supervised segmentation and tagging. We used random 10,000 sentences from supervised data on Kyoto, BCCWJ, and LWC datasets along with unsupervised datasets in Table 3. Results are shown in Table 6: segmentation accuracies came close to 90% but do not go beyond. By inspecting the segmentation and POS that PYHSMM has output, we found that this is not necessarily a fault of our model, but it came from the often inconsistet or incorrect tagging of the dataset. In many cases PYHSMM found more “natural” segmentations, but it does not always conform to the gold annotations. On the other hand, it often oversegments emotional expressions (sequence of the same character, for example) and this is one of the major sources of errors. Finally, we note that our proposed model for unsupervised learning is most effective for the language which we do not know its syntactic behavior but only know raw strings as its data. In Figure 8, we show an excerpt of results to model a Japanese local dialect (Mikawa-ben around Nagoya district) collected from a specific Twitter. Even from the surface appearance of characters, we can see that similar words are assigned to the same state including some emoticons (states 9,29,32), and in fact we can identify a state of postpositions specific to that dialect (state 3). Notice that the words themselves are not trivial before this analysis. There are also some name of local places (state 41) and general Japanese postpositions (2) or nouns (11,18,25,27,31). Because of the sparsity promoting prior (7) over the hidden states, actually used states are sparse and the results can be considered quite satisfactory. 6 Discussion The characteristics of NPYLM is a Baysian integration of character and word level information, which is related to (Blunsom and Cohn, 2011) and the adaptor idea of (Goldwater et al., 2011). 
This Dataset Seg POS Kyoto 92.1 87.1 BCCWJ 89.4 83.1 LWC 88.5 86.9 Table 6: Semi-supervised segmentation and POS tagging accuracies. POS is measured by precision. 1780 z =1 z =3 z =10 z =11 z =18 啦 227 呀 182 去 86 开心 65 走 62 哈 53 鸟 44 喽 41 波 31 测试 30 。 3309 ! 1901 了 482 啊 226 呢 110 哦 93 啦 69 哈哈 56 地址 47 晚安 43 , 13440 # 5989 的 5224 。 3237 我 1504 是 1206 ! 1190 在 900 都 861 和 742 可以 207 呢 201 。 199 那么 192 多 192 打 177 才 167 比 165 对 154 几 146 东 68 大 60 南 59 , 55 西 53 路 51 海 49 山 49 区 45 去 39 1 Figure 7: Some interesting words and states induced from Weibo corpus (K = 20). Numbers represent frequencies that each word is generated from that class. Although not perfect, emphatic (z = 1), endof-sentence expressions (z = 3), and locative words (z = 18) are learned from tweets. Distinction is far more clear in the semi-supervised experiments (not shown here). z Induced words 2 の、はにがでともを「 3 ぞんかんねのんだにだんりんかんだのん 9 (*ˆˆ*) !(ˆ-ˆ; (ˆ_ˆ;) (ˆˆ;; !(ˆˆ;; 10 。!!!?」(≧∇≦)!!」「 11 楽入ど寒大丈夫会受停電良美味台風が 13 にらわなよねだらじゃんねえぁ 18 今年最近豊川地元誰豊田今度次豊川高校 19 さんんめ食べってよろしくありがとうじゃん 20 これ知人それどこまあみんな東京いや方 24 三河弁このよお何そほい今日またほ 25 他一緒5大変頭春参加指世代地域 26 マジ豊橋カレーコレトキワコーヒープロファン 27 行」方言& 言葉普通夜店」始確認 29 ( !(; (´・!!(*`?(´・(*ˆ_ˆ*) 30 気うち店ほうこここっち先生友人いろいろ 31 女子無理決近い安心標準語感動蒲郡試合 32 ( (*\(ˆ \(ˆ (ˆ !*\(ˆ ~(ˆ_ˆ (*ˆ 34 ヤマサマーラオレハイジイメージクッピーラムネ 35 なーそう好きことらんなんらみ意味 36 いいどうまい杏果ぐろめっちゃかわいはよ 41 豊橋名古屋三河西三河名古屋弁名古屋人大阪 Figure 8: Unsupervised analysis of a Japanese local dialect by PYHSMM. (K =50) is different from (and misunderstood in) a joint model of Chen et al. (2014), where word and character HMMs are just multiplied. There are no information shared from the model structure, and in fact it depends on a BIO-like heuristic tagging scheme in the character HMM. In the present paper, we extended it to include a hidden state for each word. Therefore, it might be interesting to introduce a hidden state also for each character. Unlike western languages, there are many kinds of Chinese characters that work quite differently, and Japanese uses several distinct kinds of characters, such as a Chinese character, Hiragana, Katakana, whose mixture would constitute a single word. Therefore, statistical modeling of different types of characters is an important research venue for the future. NPYLM has already applied and extended to speech recognition (Neubig et al., 2010), statistical machine translation (Nguyen et al., 2010), or even robotics (Nakamura et al., 2014). For all these research area, we believe PYHSMM would be beneficial for their extension. 7 Conclusion In this paper, we proposed a Pitman-Yor Hidden Semi-Markov model for joint unsupervised word segmentation and part-of-speech tagging on a raw sequence of characters. It can also be viewed as a way to build a class n-gram language model directly on strings, without any “word” information a priori. We applied our PYHSMM on several standard datasets on Japanese, Chinese and Thai, and it outperformed previous figures to yield the state-ofthe-art results, as well as automatically induced word categories. It is especially beneficial for colloquial text, local languages or speech transcripts, where not only words themselves are unknown but their syntactic behavior is a focus of interest. In order to adapt to human standards given in supervised data, it is important to conduct a semisupervised learning with discriminative classifiers. Since semi-supervised learning requires generative models in advance, our proposed Bayesian generative model will also lay foundations to such an extension. References Phil Blunsom and Trevor Cohn. 
2011. A Hierarchical Pitman-Yor Process HMM for Unsupervised Part of Speech Induction. In ACL 2011, pages 865–874. 1781 Miaohong Chen, Baobao Chang, and Wenzhe Pei. 2014. A Joint Model for Unsupervised Chinese Word Segmentation. In EMNLP 2014, pages 854– 863. John D. Cook. 2009. Notes on the Negative Binomial Distribution. http://www.johndcook.com/ negative binomial.pdf. Tom Emerson. 2005. The Second International Chinese Word Segmentation Bakeoff. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing. Sharon Goldwater and Tom Griffiths. 2007. A Fully Bayesian Approach to Unsupervised Part-of-Speech Tagging. In Proceedings of ACL 2007, pages 744– 751. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2006. Contextual Dependencies in Unsupervised Word Segmentation. In Proceedings of ACL/COLING 2006, pages 673–680. Sharon Goldwater, Thomas L. Griffiths, and Mark Johnson. 2011. Producing Power-Law Distributions and Damping Word Frequencies with TwoStage Language Models. Journal of Machine Learning Research, 12:2335–2382. Chang-Ning Huang and Hai Zhao. 2007. Chinese word segmentation: A decade review. Journal of Chinese Information Processing, 21(3):8–20. Matthew J. Johnson and Alan S. Willsky. 2013. Bayesian Nonparametric Hidden Semi-Markov Models. Journal of Machine Learning Research, 14:673–701. Mark Johnson, Thomas L. Griffiths, and Sharon Goldwater. 2007. Bayesian Inference for PCFGs via Markov Chain Monte Carlo. In Proceedings of HLT/NAACL 2007, pages 139–146. Krit Kosawat. 2009. InterBEST 2009: Thai Word Segmentation Workshop. In Proceedings of 2009 Eighth International Symposium on Natural Language Processing (SNLP2009), Thailand. Sadao Kurohashi and Makoto Nagao. 1998. Building a Japanese Parsed Corpus while Improving the Parsing System. In Proceedings of LREC 1998, pages 719–724. http://nlp.kuee.kyoto-u.ac.jp/nl-resource/ corpus.html. Kikuo Maekawa. 2007. Kotonoha and BCCWJ: Development of a Balanced Corpus of Contemporary Written Japanese. In Corpora and Language Research: Proceedings of the First International Conference on Korean Language, Literature, and Culture, pages 158–177. Pierre Magistry and Benoˆıt Sagot. 2013. Can MDL Improve Unsupervised Chinese Word Segmentation? In Proceedings of the Seventh SIGHAN Workshop on Chinese Language Processing, pages 2–10. Daichi Mochihashi and Eiichiro Sumita. 2008. The Infinite Markov Model. In Advances in Neural Information Processing Systems 20 (NIPS 2007), pages 1017–1024. Daichi Mochihashi, Takeshi Yamada, and Naonori Ueda. 2009. Bayesian Unsupervised Word Segmentation with Nested Pitman-Yor Language Modeling. In Proceedings of ACL-IJCNLP 2009, pages 100–108. Kevin Murphy. 2002. Hidden semi-Markov models (segment models). http://www.cs.ubc.ca/˜murphyk/ Papers/segment.pdf. Tomoaki Nakamura, Takayuki Nagai, Kotaro Funakoshi, Shogo Nagasaka, Tadahiro Taniguchi, and Naoto Iwahashi. 2014. Mutual Learning of an Object Concept and Language Model Based on MLDA and NPYLM. In 2014 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS’14), pages 600–607. Graham Neubig, Masato Mimura, Shinsuke Mori, and Tatsuya Kawahara. 2010. Learning a Language Model from Continuous Speech. In Proc. of INTERSPEECH 2010. ThuyLinh Nguyen, Stephan Vogel, and Noah A. Smith. 2010. Nonparametric Word Segmentation for Machine Translation. In COLING 2010, pages 815– 823. Jim Pitman and Marc Yor. 1997. The Two-Parameter Poisson-Dirichlet Distribution Derived from a Stable Subordinator. 
Annals of Probability, 25(2):855– 900. Sunita Sarawagi and William W. Cohen. 2005. SemiMarkov Conditional Random Fields for Information Extraction. In Advances in Neural Information Processing Systems 17 (NIPS 2004), pages 1185–1192. Steven L. Scott. 2002. Bayesian Methods for Hidden Markov Models. Journal of the American Statistical Association, 97:337–351. Kairit Sirts and Tanel Alum¨ae. 2012. A Hierarchical Dirichlet Process Model for Joint Part-of-Speech and Morphology Induction. In NAACL 2012, pages 407–416. Yee Whye Teh. 2006. A Bayesian Interpretation of Interpolated Kneser-Ney. Technical Report TRA2/06, School of Computing, NUS. Jurgen Van Gael, Andreas Vlachos, and Zoubin Ghahramani. 2009. The infinite HMM for unsupervised PoS tagging. In EMNLP 2009, pages 678– 687. Valentin Zhikov, Hiroya Takamura, and Manabu Okumura. 2010. An Efficient Algorithm for Unsupervised Word Segmentation with Branching Entropy and MDL. In EMNLP 2010, pages 832–842. 1782
2015
171
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1783–1792, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Coupled Sequence Labeling on Heterogeneous Annotations: POS Tagging as a Case Study Zhenghua Li, Jiayuan Chao, Min Zhang∗, Wenliang Chen (1) Soochow University (2) Collaborative Innovation Center of Novel Software Technology and Industrialization Jiangsu Province, China {zhli13,minzhang,wlchen}@suda.edu.cn; china [email protected] Abstract In order to effectively utilize multiple datasets with heterogeneous annotations, this paper proposes a coupled sequence labeling model that can directly learn and infer two heterogeneous annotations simultaneously, and to facilitate discussion we use Chinese part-ofspeech (POS) tagging as our case study. The key idea is to bundle two sets of POS tags together (e.g. “[NN, n]”), and build a conditional random field (CRF) based tagging model in the enlarged space of bundled tags with the help of ambiguous labelings. To train our model on two non-overlapping datasets that each has only one-side tags, we transform a one-side tag into a set of bundled tags by considering all possible mappings at the missing side and derive an objective function based on ambiguous labelings. The key advantage of our coupled model is to provide us with the flexibility of 1) incorporating joint features on the bundled tags to implicitly learn the loose mapping between heterogeneous annotations, and 2) exploring separate features on one-side tags to overcome the data sparseness problem of using only bundled tags. Experiments on benchmark datasets show that our coupled model significantly outperforms the state-ofthe-art baselines on both one-side POS tagging and annotation conversion tasks. The codes and newly annotated data are released for non-commercial usage.1 ∗Correspondence author. 1http://hlt.suda.edu.cn/˜zhli 1 Introduction The scale of available labeled data significantly affects the performance of statistical data-driven models. As a widely-used structural classification problem, sequence labeling is prone to suffer from the data sparseness issue. However, the heavy cost of manual annotation typically limits one labeled resource in both scale and genre. As a promising research line, semi-supervised learning for sequence labeling has been extensively studied. Huang et al. (2009) show that standard self-training can boost the performance of a simple hidden Markov model (HMM) based part-of-speech (POS) tagger. Søgaard (2011) apply tri-training to English POS tagging, boosting accuracy from 97.27% to 97.50%. Sun and Uszkoreit (2012) derive word clusters from largescale unlabeled data as extra features for Chinese POS tagging. Recently, the use of natural annotation has becomes a hot topic in Chinese word segmentation (Jiang et al., 2013; Liu et al., 2014; Yang and Vozila, 2014). The idea is to derive segmentation boundaries from implicit information encoded in web texts, such as anchor texts and punctuation marks, and use them as partially labeled training data in sequence labeling models. The existence of multiple annotated resources opens another door for alleviating data sparseness. 
For example, Penn Chinese Treebank (CTB) contains about 20 thousand sentences annotated with word boundaries, POS tags, and syntactic structures (Xue et al., 2005), which is widely used for research on Chinese word segmentation and POS tagging. People’s Daily corpus (PD)2 is a large-scale corpus annotated with word segments and POS tags, containing about 300 thousand sentences from the first half of 1998 of People’s 2http://icl.pku.edu.cn/icl_groups/ corpustagging.asp 1783 ѣള1 䠃㿼2 ਇኋ4 China focuses on economic development ᡇള1 ཝ࣑2 ਇኋ3 ᮏ㛨4 Our nation strongly develops education [VV,v] [VE,v] [VC,v] [VA,v] Bundled tags [NN,n] [NN,Ng] [NN,vn] 㔅⎄3 Figure 1: An example to illustrate the annotation differences between CTB (above) and PD (below), and how to transform a one-side tag into a set of bundled tags. “NN” and “n” represent nouns; “VV”and “v” represent verbs. Daily newspaper (see Table 2). The two resources were independently built for different purposes. CTB was designed to serve syntactic analysis, whereas PD was developed to support information extraction systems. However, the key challenge of exploiting the two resources is that they adopt different sets of POS tags which are impossible to be precisely converted from one to another based on heuristic rules. Figure 1 shows two example sentences from CTB and PD. Please refer to Table B.3 in Xia (2000) for detailed comparison of the two guidelines. Previous work on exploiting heterogeneous data (CTB and PD) mainly focuses on indirect guidefeature based methods. The basic idea is to use one resource to generate extra guide features on another resource (Jiang et al., 2009; Sun and Wan, 2012), which is similar to stacked learning (Nivre and McDonald, 2008). First, PD is used as source data to train a source model TaggerPD. Then, TaggerPD generates automatic POS tags on the target data CTB, called source annotations. Finally, a target model TaggerCTB-guided is trained on CTB, using source annotations as extra guide features. Although the guide-feature based method is effective in boosting performance of the target model, we argue that it may have two potential drawbacks. First, the target model TaggerCTB-guided does not directly use PD as training data, and therefore fails to make full use of rich language phenomena in PD. Second, the method is more complicated in real applications since it needs to parse a test sentence twice to get the final results. This paper proposes a coupled sequence labeling model that can directly learn and infer two heterogeneous annotations simultaneously. We use Chinese part-of-speech (POS) tagging as our case study.3 The key idea is to bundle two sets of POS tags together (e.g. “[NN, n]”), and build a conditional random field (CRF) based tagging model in the enlarged space of bundled tags. To make use of two non-overlapping datasets that each has only one-side tags, we transform a oneside tag into a set of bundled tags by considering all possible mappings at the missing side and derive an objective function based on ambiguous labelings. During training, the CRF-based coupled model is supervised by such ambiguous labelings. The advantages of our coupled model are to provide us the flexibility of 1) incorporating joint features on the bundled tags to implicitly learn the loose mapping between two sets of annotations, and 2) exploring separate features on one-side tags to overcome the data sparseness problem of using bundled tags. In summary, this work makes two major contributions: 1. 
We propose a coupled model which can more effectively make use of multiple resources with heterogeneous annotations, compared with both the baseline and guide-feature based method. Experiments show our approach can significantly improve POS tagging accuracy from 94.10% to 95.00% on CTB. 2. We have manually annotated CTB tags for 1, 000 PD sentences, which is the first dataset with two-side annotations and can be used for annotation-conversion evaluation. Experiments on the newly annotated data show that our coupled model also works effectively on the annotation conversion task, improving conversion accuracy from 90.59% to 93.90% (+3.31%). 2 Traditional POS Tagging (TaggerCTB) Given an input sentence of n words, denoted by x = w1...wn, POS tagging aims to find an optimal tag sequence t = t1...tn, where ti ∈T (1 ≤i ≤ n) and T is a predefined tag set. As a log-linear probabilistic model (Lafferty et al., 2001), CRF 3There are some slight differences in the word segmentation guidelines between CTB and PD, which are ignored in this work for simplicity. 1784 01: ti ◦ti−1 02: ti ◦wi 03: ti ◦wi−1 04: ti ◦wi+1 05: ti ◦wi ◦ci−1,−1 06: ti ◦wi ◦ci+1,0 07: ti ◦ci,0 08: ti ◦ci,−1 09: ti ◦ci,k, 0 < k < #ci −1 10: ti ◦ci,0 ◦ci,k, 0 < k < #ci −1 11: ti ◦ci,−1 ◦ci,k, 0 < k < #ci −1 12: if #ci = 1 then ti ◦wi ◦ci−1,−1 ◦ci+1,0 13: if ci,k = ci,k+1 then ti ◦ci,k ◦“consecutive” 14: ti ◦prefix(wi, k), 1 ≤k ≤4, k ≤#ci 15: ti ◦suffix(wi, k), 1 ≤k ≤4, k ≤#ci Table 1: POS tagging features f(x, i, ti−1, ti). ◦ means string concatenation; ci,k denotes the kth Chinese character of wi; ci,0 is the first Chinese character; ci,−1 is the last Chinese character; #ci is the total number of Chinese characters contained in wi; prefix/suffix(wi, k) denote the kCharacter prefix/suffix of wi. defines the probability of a tag sequence as: P(t|x; θ) = exp(Score(x, t; θ)) P t′ exp(Score(x, t′; θ)) Score(x, t; θ) = X 1≤i≤n θ · f(x, i, ti−1, ti) (1) where f(x, i, ti−1, ti) is the feature vector at the ith word and θ is the weight vector. We adopt the state-of-the-art tagging features in Table 1 (Zhang and Clark, 2008). 3 Coupled POS Tagging (TaggerCTB&PD) In this section, we introduce our coupled model, which is able to learn and predict two heterogeneous annotations simultaneously. The idea is to bundle two sets of POS tags together and let the CRF-based model work in the enlarged tag space. For example, a CTB tag “NN” and a PD tag “n” would be bundled into “[NN,n]”. Figure 2 shows the graphical structure of our model. Different from the traditional model in Eq. (1), our coupled model defines the score of a bundled tag sequence as follows: Score(x, [ta, tb]; θ) = X 1≤i≤n θ ·   f(x, i, [ta i−1, tb i−1], [ta i , tb i]) f(x, i, ta i−1, ta i ) f(x, i, tb i−1, tb i)   (2) where the first item of the enlarged feature vector is called joint features, which can be obtained by w1 wi-1 wi wn ... ... Figure 2: Graphical structure of our coupled CRF model. instantiating Table 1 by replacing ti with bundled tags [ta i , tb i]; the second and third items are called separate features, which are based on single-side tags. The advantages of our coupled model over the traditional model are to provide us with the flexibility of using both kinds of features, which significantly contributes to the accuracy improvement as shown in the following experiments. 3.1 Mapping Functions The key challenge of our idea is that both CTB and PD are non-overlapping and each contains only one-side POS tags. 
Therefore, the problem is how to construct training data for our coupled model. We denote the tag set of CTB as T a, and that of PD as T b, and the bundled tag set as T a&b. Since the full Cartetian T a × T b would lead to a very large number of bundled tags, making the model very slow, we would like to come up with a much smaller T a&b ⊆T a × T b, based on linguistic insights of the annotation guidelines of the two datasets. To obtain a proper T a&b, we introduce a mapping function between the two sets of tags as m : T a × T b →{0, 1}, which only allow specific tag pairs to be bundled together. m(ta, tb) = ( 1 if the two tags can be bundled 0 otherwise (3) where one mapping function m corresponds to one T a&b. When the mapping function becomes looser, the tag set size |T a&b| becomes larger. Then, based on the mapping function, we can map a single-side POS tag into a set of bundled tags by considering all possible tags at the missing side, as illustrated in Figure 1. The word “Ñ U4” is tagged as “NN” at the CTB side. Suppose that the mapping function m tells that “NN” can be mapped into three tags at the PD side, i.e., “n”, “Ng”, and “vn”. Then, we create three bundled tags for the word, i.e., “[NN, n]”, “[NN, Ng]”, 1785 “[NN, vn]” as its gold-standard references during training. It is known as ambiguous labelings when a training instance has multiple gold-standard labels. Similarly, we can obtain bundled tags for all other words in sentences of CTB and PD. After such transformation, the two datasets are now in the same tag space. At the beginning of this work, our intuition is that the coupled model would achieve the best performance if we build a tight and linguistically motivated mapping function. However, our preliminary experiments show that our intuitive assumption is actually incorrect. Therefore, we experiment with the following four mapping functions to manage to figure out the reasons behind and to better understand our coupled model. • The tight mapping function produces 145 tags, and is constructed by strictly following linguistic principles and our careful study of the two guidelines and datasets. • The relaxed mapping function results in 179 tags, which is an looser version of the tight mapping function by including extra 34 weak mapping relationships. • The automatic mapping function generates 346 tags. We use the baseline TaggerCTB to parse PD, and collect all automatic mapping relationships. • The complete mapping function obtains 1, 254 tags (|T a| × |T b| = 33 × 38). 3.2 Training Objective with Ambiguous Labelings So far, we have formally defined a coupled model and prepared both CTB and PD in the same bundled tag space. The next problem is how to learn the model parameters θ. Note that after our transformation, a sentence in CTB or PD have many tag sequences as gold-standard references due to the loose mapping function, known as ambiguous labelings. Here, we derive a training objective based on ambiguous labelings. For simplicity, we illustrate the idea based on the notations of the baseline CRF model in Eq. (1). Given a sentence x, we denote a set of ambiguous tag sequences as V. Then, the probability of V is the sum of probabilities of all tag sequences contained in V: p(V|x; θ) = X t∈V p(t|x; θ) (4) Algorithm 1 SGD training with two labeled datasets. 
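As a concrete, deliberately tiny sketch of the transformation described in Section 3.1, the snippet below expands one-side CTB tags into candidate bundled tags; the mapping dictionary contains only the Figure 1 example and is not the actual relaxed mapping function.

# Sketch of turning one-side gold tags into sets of bundled tags
# (the ambiguous labelings V used in Eq. 4).
CTB_TO_PD = {"NN": ["n", "Ng", "vn"]}   # illustrative fragment only

def bundle_ctb_side(ctb_tags):
    # Each CTB tag is expanded to every compatible [CTB, PD] bundled tag.
    return [[(t, pd) for pd in CTB_TO_PD[t]] for t in ctb_tags]

# e.g. bundle_ctb_side(["NN"]) -> [[("NN", "n"), ("NN", "Ng"), ("NN", "vn")]]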
1: Input: Two labeled datasets: D(1) = {(x(1) i , V(1) i }N i=1, D(2) = {(x(2) i , V(2) i )}M i=1; Parameters: I, N ′, M′, b 2: Output: θ 3: Initialization: θ0 = 0, k = 0; 4: for i = 1 to I do {iterations} 5: Randomly select N ′ instances from D(1) and M′ instances from D(2) to compose a new dataset Di, and shuffle it. 6: Traverse Di, and use a small batch Db k ⊆ Di at one step. 7: θk+1 = θk + ηk 1 b∇L(Db k; θk) 8: k = k + 1 9: end for Suppose the training data is D = {(xi, Vi)}N i=1. Then the log likelihood is: L(D; θ) = N X i=1 log p(Vi|xi; θ) (5) After derivation, the gradient is: ∂L(D; θ) ∂θ = N X i=1 (Et∈Vi[f(xi, t)] −Et[f(xi, t)]) (6) where f(xi, t) is an aggregated feature vector for tagging xi as t; Et∈Vi[.] means model expectation of the features in the constrained space of Vi; Et[.] is model expectation with no constraint. This function can be efficiently solved by the forward-backward algorithm. Please note that the training objective of a traditional CRF model can be understood as a special case where Vi contains one sequence. 3.3 SGD Training with Two Datasets We adopt stochastic gradient descent (SGD) to iteratively learn θ for our baseline and coupled models. However, we have two separate training data, and CTB may be overwhelmed by PD if directly merging the two datasets into one, since PD is 15 times larger than CTB (see Table 2), Therefore, we propose a simple corpus-weighting strategy, as shown in Algorithm 1, where Db k is a subset of training data used in kth step update; b is the batch size; ηk is a update step. The idea is to randomly sample instances from each training data in a certain proportion before each iteration. 1786 The sampled data is then used for one-iteration training. Later experiments will investigate the effect of the weighting proportion. In this work, we use b = 30, and follow the implementation in CRFsuite4 to decide ηk. 4 Manually Annotating PD Sentences with CTB Tags To evaluate different methods on annotation conversion, we build the first dataset that contains 1, 000 sentences with POS tags on both sides of CTB and PD. The sentences are randomly sampled from PD. To save annotation effort, we only select 20% most difficult tokens to manually annotate. The difficulty of a word wi is measured based on marginal probabilities produced by the baseline TaggerCTB. p(ti|x, wi; θ) denotes the marginal probability of tagging wi as ti. The basic assumption is that wi is more difficult to annotate if its most likely tag candidate (arg maxt p(t|x, wi; θ)) gets lower marginal probability. We build a visualized online annotation system to facilitate manual labeling. The annotation task is designed in such way that at a time an annotator is provided with a sentence and one focus word, and is required to decide the CTB POS tag of the word. To further simplify annotation, we provide two or three most likely tag candidates as well, so that annotators can choose one either among the candidates or from a full list. We employ 8 undergraduate students as our annotators. Annotators are trained on simulated tasks from CTB data for several hours, and and start real annotation once reaching certain accuracy. To guarantee annotation quality, we adopt multiple annotation. Initially, one task is randomly assigned to two annotators. Later, if the two annotators submit different results, the system will assign the task to two more annotators. 
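The difficulty-based selection of the roughly 20% most ambiguous tokens can be sketched as follows. This is a hedged illustration: the per-token marginal probabilities are assumed to be already computed (in the paper they come from the baseline TaggerCTB), and the data structures are ours.

```python
# Sketch: pick the most ambiguous tokens for manual CTB annotation.
# `marginals[s][i]` is assumed to map each tag to p(t | x, w_i; theta) for token i
# of sentence s, e.g. produced by forward-backward inference in the baseline CRF.

def select_difficult_tokens(marginals, ratio=0.2):
    scored = []
    for s, sent_marginals in enumerate(marginals):
        for i, dist in enumerate(sent_marginals):
            confidence = max(dist.values())   # probability of the 1-best tag
            scored.append((confidence, s, i))
    scored.sort()                             # least confident (most difficult) first
    k = int(len(scored) * ratio)
    return [(s, i) for _, s, i in scored[:k]]

if __name__ == "__main__":
    toy = [
        [{"NN": 0.95, "VV": 0.05}, {"NN": 0.55, "VV": 0.45}],
        [{"AD": 0.60, "JJ": 0.40}],
    ]
    print(select_difficult_tokens(toy, ratio=0.5))  # -> the least confident token(s)
```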
To aggregate annotation results, we only retain annotation tasks that the first two annotators agree (91.0%) or three annotators among four agree (5.6%), and discard other tasks (3.4%). Finally, we obtain 5, 769 words with both CTB and PD tags, with each annotator’s detailed submissions, and could be used as a non-synthesized dataset for studying aggregating submissions from non-expert annotators in crowdsourcing platforms (Qing et al., 2014). The data is also fully released for non-commercial usage. 4http://www.chokkan.org/software/ crfsuite/ 5 Experiments In this section, we conduct experiments to verify the effectiveness of our approach. We adopt CTB (version 5.1) with the standard data split, and randomly split PD into four sets, among which one set is 20% partially annotated with CTB tags. The data statistics is shown in Table 2. The main concern of this work is to improve accuracy on CTB by exploring large-scale PD, since CTB is relatively small, but is widely-used benchmark data in the research community. We use the standard token-wise tagging accuracy as the evaluation metric. For significance test, we adopt Dan Bikel’s randomized parsing evaluation comparator (Noreen, 1989).5. The baseline CRF is trained on either CTB training data with 33 tags, or PD training data with 38 tags. The coupled CRF is trained on both two separate training datasets with bundled tags (179 tags for the relaxed mapping function). During evaluation, the coupled CRF is not directly evaluated on bundled tags, since bundled tags are unavailable in either CTB or PD test data. Instead, the coupled and baseline CRFs are both evaluated on one-side tags. 5.1 Model Development Our coupled model has two major parameters to be decided. The first parameter is to determine the mapping function between CTB and PD annotations, and the second parameter is the relative weights of the two datasets during training (N ′ vs. M′: number of sentences in each dataset used for training at one iteration). Effect of mapping functions (described in Subsection 3.1) is illustrated in Figure 3. Empirically, we adopt N ′ = 5K vs. M′ = 20K to merge the two training datasets at each iteration. Our intuition is that using this proportion, CTB should not be overwhelmed by PD, and both training data can be used up in relatively similar speed. Specifically, all training data of CTB can be consumed in about 3 iterations, whereas PD can be consumed in about 14 iterations. We also present the results of the baseline model trained using 5K sentences in one iteration for better comparison. Contrary to our intuitive assumption, it actually leads to very bad performance when using the 5http://www.cis.upenn.edu/˜dbikel/ software.html 1787 #sentences #tokens with CTB tags #tokens with PD tags CTB train 16,091 437,991 – dev 803 20,454 – test 1,910 50,319 – PD train 273,883 – 6,488,208 dev 1,000 – 23,427 test 2,500 – 58,301 newly labeled 1,000 5,769 27,942 Table 2: Data statistics. Please kindly note that the 1, 000 sentences originally from PD are only partially annotated with CTB tags (about 20% most ambiguous tokens). 92 92.5 93 93.5 94 94.5 95 95.5 1 11 21 31 41 51 61 71 81 91 Accuracy on CTB-dev (%) Iteration Number Complete Automatic Relaxed Tight Baseline:CTB(5K) Figure 3: Accuracy on CTB-dev regarding to mapping functions. tight mapping function that is carefully created based on linguistic insights, which is even inferior to the baseline model. The relaxed mapping function outperforms the tight function by large margin. 
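How loose a mapping function is determines how many bundled tags each one-side annotation expands into. The following toy sketch mirrors the Figure 1 example, where the CTB tag "NN" expands into three bundled tags; the mapping dictionary below is only illustrative and is not the released mapping function.

```python
# Sketch of expanding one-side annotations into ambiguous bundled-tag labelings:
# given a mapping function m (here a toy dict), a gold CTB tag is mapped to every
# allowed bundled tag by enumerating the possible PD-side tags, as in Figure 1.

TOY_MAPPING = {                 # illustrative only, not the real mapping function
    "NN": {"n", "Ng", "vn"},
}

def bundled_candidates(ctb_tag, mapping=TOY_MAPPING):
    """All bundled tags [CTB, PD] consistent with a gold CTB-side tag."""
    return [(ctb_tag, pd_tag) for pd_tag in sorted(mapping.get(ctb_tag, []))]

print(bundled_candidates("NN"))   # [('NN', 'Ng'), ('NN', 'n'), ('NN', 'vn')]
```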
The automatic function works slightly better than the relaxed one. The complete function achieves similar accuracy with the automatic one. In summary, we can conclude that our coupled model achieves much better performance when the mapping function becomes looser. In other words, this suggests that our coupled model can effectively learn the implicit mapping between heterogeneous annotations, and does not rely on a carefully designed mapping function. Since a looser mapping function leads to a larger number of bundled tags and makes the model slower, we implement a paralleled training procedure based on Algorithm 1, and run each experiment with five threads. However, it still takes about 20 hours for one iteration when using the complete mapping function; whereas the other three mapping functions need about 6, 2, and 1 hours respectively. Therefore, as a compromise, we adopt the relaxed mapping function in the fol 92 92.5 93 93.5 94 94.5 95 1 31 61 91 121 151 181 211 241 271 Accuracy on CTB-dev (%) Iteration Number CTB(5K)+PD(100K) CTB(5K)+PD(20K) CTB(5K)+PD(5K) CTB(5K)+PD(1K) Baseline:CTB(5K) Figure 4: Accuracy on CTB-dev with different weighting settings. lowing experiments, which achieves slightly lower accuracy than the complete mapping function, but is much faster. Effect of weighting CTB and PD is investigated in Figure 4 and 5. Since the scale of PD is much larger than CTB, we adopt Algorithm 1 to merge the training data in a certain proportion (N ′ CTB sentences and M′ PD sentences) at each iteration. We use N ′ = 5K, and vary M′ = 1K/5K/20K/100K. Figure 4 shows the accuracy curves on CTB development data. We find that when M′ = 100K, our coupled model achieve very low accuracy, which is even worse than the baseline model. The reason should be that the training instances in CTB are overwhelmed by those in PD when M′ is large. In contrast, when M′ = 1K, the accuracy is also inferior to the case of M′ = 5K, which indicates that PD is not effectively utilized in this setting. Our model works best when M′ = 5K, which is slightly better than the case of M′ = 1K/20K. Figure 5 shows the accuracy curves on PD development data. The baseline model is trained using 100K sentences in one iteration. We find 1788 93.5 94 94.5 95 95.5 96 96.5 97 97.5 1 31 61 91 121 151 181 211 241 271 Accuracy on PD-dev (%) Iteration Number CTB(5K)+PD(100K) CTB(5K)+PD(20K) CTB(5K)+PD(5K) CTB(5K)+PD(1K) Baseline:PD(100K) Figure 5: Accuracy on PD-dev with different weighting settings. that when M′ = 100K, our coupled model achieves similar accuracy with the baseline model. When M′ becomes smaller, our coupled model becomes inferior to the baseline model. Particularly, when M′ = 1K, the model converges very slowly. However, from the trend of the curves, we expect that the accuracy gap between our coupled model with M′ = 5K/20K and the baseline model should be much smaller when reaching convergence. Based on the above observation, we adopt N ′ = 5K and M′ = 5K in the following experiments. Moreover, we select the best iteration on the development data, and use the corresponding model to parse the test data. 5.2 Final Results Table 3 shows the final results on the CTB test data. We re-implement the guide-feature based method of Jiang et al. (2009), referred to as twostage CRF. Li et al. (2012) jointly models Chinese POS tagging and dependency parsing, and report the best tagging accuracy on CTB. 
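Before turning to the comparison in Table 3 in detail, note that the corpus-weighting strategy of Algorithm 1 reduces to a simple sampling loop. The sketch below shows only the data-merging step, with the gradient update stubbed out; it is an illustration under our own naming, not the authors' code.

```python
import random

# Sketch of the data-merging step in Algorithm 1: at each iteration, sample N'
# sentences from CTB and M' from PD, shuffle, and process in mini-batches.
# `sgd_update` is a stand-in for the gradient step on the coupled CRF.

def train(ctb, pd, iterations=3, n_prime=5, m_prime=20, batch_size=30,
          sgd_update=lambda batch, theta: theta):
    theta = {}                                    # model parameters (placeholder)
    for _ in range(iterations):
        merged = random.sample(ctb, min(n_prime, len(ctb))) \
               + random.sample(pd, min(m_prime, len(pd)))
        random.shuffle(merged)
        for start in range(0, len(merged), batch_size):
            theta = sgd_update(merged[start:start + batch_size], theta)
    return theta

if __name__ == "__main__":
    ctb = [f"ctb_sent_{i}" for i in range(50)]
    pd = [f"pd_sent_{i}" for i in range(500)]
    train(ctb, pd)
```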
The results show that our coupled model outperforms the baseline model by large margin, and also achieves slightly higher accuracy than the guide-feature based method. 5.3 Feature Study We conduct more experiments to measure individual contribution of each feature set, namely the joint features based on bundled tags and separate features based on single-side tags, as defined in Eq. (2). Table 4 shows the results. We can see that when only using separate features, our coupled model achieves only slightly better accuracy than the baseline model. This is because there is Accuracy Baseline CRF 94.10 Two-stage CRF (guide-feature) 94.81 (+0.71) † Coupled CRF 95.00 (+0.90) †‡ Best result (Li et al., 2012) 94.60 Table 3: Final results on CTB test data. † means the corresponding approach significantly outperforms the baseline at confidence level of p < 10−5; whereas ‡ means the accuracy difference between the two-stage CRF and the coupled CRF is significant at confidence level of p < 10−2. dev test Baseline CRF 94.28 94.10 Coupled CRF (w/ separate feat) 94.36 94.43 (+0.33) Coupled CRF (w/ joint feat) 92.92 92.90 (-1.20) Coupled CRF (full) 95.10 95.00 (+0.90) Table 4: Accuracy on CTB: feature study. little connection and help between the two sets annotations. When only using joint features, our coupled model becomes largely inferior to the baseline, which is due to the data sparseness problem for the joint features. However, when the two sets of features are combined, the coupled model largely outperforms the baseline model. These results indicate that both joint features and separate features are indispensable components and complementary to each other for the success of our coupled model. 5.4 Results on Annotation Conversion In this subsection, we evaluate different methods on the annotation conversion task using our newly annotated 1, 000 sentences. The gold-standard PD-to-CTB conversion Baseline CRF 90.59 Two-stage CRF (guide-feature) 93.22 (+2.63) † Coupled CRF 93.90 (+3.31) †‡ Table 5: Conversion accuracy on our annotated data. † means the corresponding approach significantly outperforms the baseline at confidence level of p < 10−5; whereas ‡ means the accuracy difference between the two-stage CRF and the coupled CRF is significant at confidence level of p < 10−2. 1789 dev test Baseline CRF 94.28 94.10 Coupled CRF 95.10 95.00 (+0.90) † Baseline CRF + converted PD 95.01 94.81 (+0.71) †‡ Table 6: Accuracy on CTB: using converted PD. † means the corresponding approach significantly outperforms the baseline at confidence level of p < 10−5; whereas ‡ means the accuracy difference between the coupled CRF and the baseline CRF with converted PD is significant at confidence level of p < 10−2. PD-side tags are provided, and the goal is to obtain the CTB-side tags via annotation conversion. We evaluate accuracy on the 5, 769 words having manually annotated CTB-side tags. Our coupled model can be naturally used for annotation conversion. The idea is to perform constrained decoding on the test data, using the PD-side tags as hard constraints. The guidefeature based method can also perform annotation conversion by using the gold-standard PD-side tags to compose guide features. Table 5 shows the results. The accuracy is much lower than those in Table 3, because the 5, 769 words used for evaluation are 20% most ambiguous tokens in the 1, 000 test sentence (partial annotation to save annotation effort). 
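Before returning to Table 5, the constrained decoding used for conversion can be sketched as follows: during Viterbi search over bundled tags, only bundles whose PD side matches the gold PD tag are allowed, and the CTB side of the best path is returned. The scores and the toy bundle set below are placeholders standing in for the trained CRF potentials; we assume every gold PD tag maps to at least one bundle.

```python
# Sketch of PD-to-CTB conversion by constrained decoding: search over bundled tags
# [CTB, PD], but at each position allow only bundles whose PD side equals the gold
# PD tag. emit() and trans() are toy stand-ins for the CRF potentials.

def constrained_viterbi(words, gold_pd, bundles, emit, trans):
    """bundles: list of (ctb, pd) pairs; emit(w, b) and trans(b1, b2) return scores."""
    allowed = [[b for b in bundles if b[1] == g] for g in gold_pd]
    # best[i][b] = (score of best path ending in bundle b at position i, backpointer)
    best = [{b: (emit(words[0], b), None) for b in allowed[0]}]
    for i in range(1, len(words)):
        column = {}
        for b in allowed[i]:
            prev_b, score = max(
                ((pb, best[i - 1][pb][0] + trans(pb, b)) for pb in allowed[i - 1]),
                key=lambda x: x[1])
            column[b] = (score + emit(words[i], b), prev_b)
        best.append(column)
    # follow backpointers from the best final bundle
    b = max(best[-1], key=lambda x: best[-1][x][0])
    path = [b]
    for i in range(len(words) - 1, 0, -1):
        b = best[i][b][1]
        path.append(b)
    return [ctb for ctb, _ in reversed(path)]

if __name__ == "__main__":
    bundles = [("NN", "n"), ("NN", "vn"), ("VV", "v")]
    emit = lambda w, b: 1.0 if w.endswith("ing") == (b[0] == "VV") else 0.0
    trans = lambda b1, b2: 0.1 if b1[0] != b2[0] else 0.0
    print(constrained_viterbi(["reading", "books"], ["v", "n"], bundles, emit, trans))
```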
From Table 5, we can see that our coupled model outperforms both the baseline and guide-feature based methods by large margin. 5.5 Results of Training with Converted Data One weakness of our coupled model is the inefficiency problem due to the large bundled tag set. In practice, we usually only need results following one annotation style. Therefore, we employ our coupled model to convert PD into the style of CTB, and train our baseline model with two training data with homogeneous annotations. Again, Algorithm 1 is used to merge the two data with N ′ = 5K and M′ = 5K. The results are shown in the bottom row in Table 6. We can see that with the extra converted data, the baseline model can achieve slightly lower accuracy with the coupled model and avoid the inefficiency problem at the meantime. 6 Related Work This work is partially inspired by Qiu et al. (2013), who propose a model that performs heterogeneous Chinese word segmentation and POS tagging and produces two sets of results following CTB and PD styles respectively. Different from our CRFbased coupled model, their approach adopts a linear model, which directly combines two separate sets of features based on single-side tags, without considering the interacting joint features between the two annotations. They adopt an approximate decoding algorithm which tries to find the best single-side tag sequence with reference to tags at the other side. In contrast, our approach is a direct extension of traditional CRF, and is more theoretically simple from the perspective of modelling. The use of both joint and separate features is proven to be crucial for the success of our coupled model. In addition, their work indicates that their model relies on a hand-crafted loose mapping between annotations, which is opposite to our findings. The naming of the “coupled” CRF is borrowed from the work of Qiu et al. (2012), which treats the joint task of Chinese word segmentation and POS tagging as two coupled sequence labeling problems. Zhang et al. (2014) propose a shift-reduce dependency parsing model which can simultaneously learn and produce two heterogeneous parse trees. However, their approach assumes the existence of data with annotations at both sides, which is obtained by converting phrase-structure trees into dependency trees with different heuristic rules. This work is also closely related with multitask learning, which aims to jointly learn multiple related tasks with the benefit of using interactive features under a share representation (BenDavid and Schuller, 2003; Ando and Zhang, 2005; Parameswaran and Weinberger, 2010). However, according to our knowledge, multi-task learning typically assumes the existence of data with labels for multiple tasks at the same time, which is unavailable in our situation. As one reviewer kindly pointed out that our model is a factorial CRF (Sutton et al., 2004), in the sense that the bundled tags can be factorized two connected latent variables. Initially, factorial CRFs are designed to jointly model two related (and typically hierarchical) sequential labeling tasks, such as POS tagging and chunking. In this work, our coupled CRF jointly models two same tasks which have different annotation schemes. Moreover, this work provides a natural way to 1790 learn from incomplete annotations where one sentence only contains one-side labels. The reviewer also suggests that our objective can be optimized with the latent variable structured perceptron of Sun et al. (2009), which we leave as future work. 
Learning with ambiguous labelings are previously explored for classification (Jin and Ghahramani, 2002), sequence labeling (Dredze et al., 2009), parsing (Riezler et al., 2002; T¨ackstr¨om et al., 2013; Li et al., 2014a; Li et al., 2014b). Recently, researchers derive natural annotations from web data, transform them into ambiguous labelings to supervise Chinese word segmentation models (Jiang et al., 2013; Liu et al., 2014; Yang and Vozila, 2014). 7 Conclusions This paper proposes an effective coupled sequence labeling model for exploiting multiple non-overlapping datasets with heterogeneous annotations. Please note that our model can also be naturally trained on datasets with both-side annotations if such data exists. Experimental results demonstrate that our model work better than the baseline and guide-feature based methods on both one-side POS tagging and annotation conversion. Specifically, detailed analysis shows several interesting findings. First, both the separate features and joint features are indispensable components for the success of our coupled model. Second, our coupled model does not rely on a carefully hand-crafted mapping function. Our linguistically motivated mapping function is only used to reduce the size of the bundled tag set for the sake of efficiency. Finally, using the extra training data converted with our coupled model, the baseline tagging model achieves similar accuracy improvement. In this way, we can avoid the inefficiency problem of our coupled model in real application. For future, our immediate plan is to annotate more data with both CTB and PD tags (a few thousand sentences), and to investigate our coupled model with small amount of such annotation as extra training data. Meanwhile, Algorithm 1 is empirically effective in merging two training data, but still needs manual tuning of the weighting factor on held-out data. Thus, we would like to find a more principled and theoretically sound method to merge multiple training data. Acknowledgments The authors would like to thank the undergraduate students Fangli Lu and Xiaojing Wang for building our annotation system, and Le Lu, Die Hu, Yue Zhang, Jian Zhang, Qiuyi Yan, Xinzhou Jiang for data annotation. We are also grateful that Yu Ding kindly shared her earlier codes on which our annotation system was built. We also thank the helpful comments from our anonymous reviewers. This work was supported by National Natural Science Foundation of China (Grant No. 61432013, 61203314) and Jiangsu Planned Projects for Postdoctoral Research Funds (No. 1401075B), and was also partially supported by Collaborative Innovation Center of Novel Software Technology and Industrialization of Jiangsu Province. References Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. Journal of Machine Learn Research, 6:1817–1853. Shai Ben-David and Reba Schuller. 2003. Exploiting task relatedness for multiple task learning. In COLT. Mark Dredze, Partha Pratim Talukdar, and Koby Crammer. 2009. Sequence learning from data with multiple labels. In ECML/PKDD Workshop on Learning from Multi-Label Data. Zhongqiang Huang, Vladimir Eidelman, and Mary Harper. 2009. Improving a simple bigram hmm part-of-speech tagger by latent annotation and selftraining. In Proceedings of NAACL, pages 213–216. Wenbin Jiang, Liang Huang, and Qun Liu. 2009. Automatic adaptation of annotation standards: Chinese word segmentation and POS tagging – a case study. In Proceedings of ACL, pages 522–530. 
Wenbin Jiang, Meng Sun, Yajuan L¨u, Yating Yang, and Qun Liu. 2013. Discriminative learning with natural annotations: Word segmentation as a case study. In Proceedings of ACL, pages 761–769. Rong Jin and Zoubin Ghahramani. 2002. Learning with multiple labels. In Proceedings of NIPS. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML 2001, pages 282–289. Zhenghua Li, Min Zhang, Wanxiang Che, and Ting Liu. 2012. A separately passive-aggressive training algorithm for joint POS tagging and dependency parsing. In COLING, pages 1681–1698. 1791 Zhenghua Li, Min Zhang, and Wenliang Chen. 2014a. Ambiguity-aware ensemble training for semi-supervised dependency parsing. In ACL, pages 457–467. Zhenghua Li, Min Zhang, and Wenliang Chen. 2014b. Soft cross-lingual syntax projection for dependency parsing. In COLING, pages 783–793. Yijia Liu, Yue Zhang, Wanxiang Che, Ting Liu, and Fan Wu. 2014. Domain adaptation for CRF-based Chinese word segmentation using free annotations. In Proceedings of EMNLP, pages 864–874. Joakim Nivre and Ryan McDonald. 2008. Integrating graph-based and transition-based dependency parsers. In Proceedings of ACL, pages 950–958. Eric W. Noreen. 1989. Computer-intensive methods for testing hypotheses: An introduction. John Wiley & Sons, Inc., New York. S. Parameswaran and K.Q. Weinberger. 2010. Large margin multi-task metric learning. In J. Lafferty, C. K. I. Williams, J. Shawe-Taylor, R.S. Zemel, and A. Culotta, editors, Advances in Neural Information Processing Systems 23, pages 1867–1875. Ciyang Qing, Ulle Endriss, Raquel Fernandez, and Justin Kruger. 2014. Empirical analysis of aggregation methods for collective annotation. In COLING, pages 1533–1542. Xipeng Qiu, Feng Ji, Jiayi Zhao, and Xuanjing Huang. 2012. Joint segmentation and tagging with coupled sequences labeling. In Proceedings of COLING 2012: Posters, pages 951–964, Mumbai, India. Xipeng Qiu, Jiayi Zhao, and Xuanjing Huang. 2013. Joint Chinese word segmentation and POS tagging on heterogeneous annotated corpora with multiple task learning. In Proceedings of EMNLP, pages 658–668. Stefan Riezler, Tracy H. King, Ronald M. Kaplan, Richard Crouch, John T. III Maxwell, and Mark Johnson. 2002. Parsing the wall street journal using a lexical-functional grammar and discriminative estimation techniques. In Proceedings of ACL, pages 271–278. Anders Søgaard. 2011. Semi-supervised condensed nearest neighbor for part-of-speech tagging. In Proceedings of ACL, pages 48–52. Weiwei Sun and Hans Uszkoreit. 2012. Capturing paradigmatic and syntagmatic lexical relations: Towards accurate Chinese part-of-speech tagging. In Proceedings of ACL, pages 242–252. Weiwei Sun and Xiaojun Wan. 2012. Reducing approximation and estimation errors for Chinese lexical processing with heterogeneous annotations. In Proceedings of ACL, pages 232–241. Xu Sun, Takuya Matsuzaki, Daisuke Okanohara, and Jun’ichi Tsujii. 2009. Latent variable perceptron algorithm for structured classification. In Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI 2009), pages 1236–1242. Charles Sutton, Khashayar Rohanimanesh, and Andrew McCallum. 2004. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. In International Conference on Machine Learning (ICML). Oscar T¨ackstr¨om, Ryan McDonald, and Joakim Nivre. 2013. 
Target language adaptation of discriminative transfer parsers. In Proceedings of NAACL, pages 1061–1071. Fei Xia. 2000. The part-of-speech tagging guidelines for the penn Chinese treebank 3.0. In Technical Report, Linguistic Data Consortium. Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. In Natural Language Engineering, volume 11, pages 207–238. Fan Yang and Paul Vozila. 2014. Semi-supervised Chinese word segmentation using partial-label learning with conditional random fields. In Proceedings of EMNLP, pages 90–98. Yue Zhang and Stephen Clark. 2008. Joint word segmentation and POS tagging using a single perceptron. In Proceedings of ACL-08: HLT, pages 888–896. Meishan Zhang, Wanxiang Che, Yanqiu Shao, and Ting Liu. 2014. Jointly or separately: Which is better for parsing heterogeneous dependencies? In Proceedings of COLING, pages 530–540. 1792
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1793–1803, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics AutoExtend: Extending Word Embeddings to Embeddings for Synsets and Lexemes Sascha Rothe and Hinrich Sch¨utze Center for Information & Language Processing University of Munich [email protected] Abstract We present AutoExtend, a system to learn embeddings for synsets and lexemes. It is flexible in that it can take any word embeddings as input and does not need an additional training corpus. The synset/lexeme embeddings obtained live in the same vector space as the word embeddings. A sparse tensor formalization guarantees efficiency and parallelizability. We use WordNet as a lexical resource, but AutoExtend can be easily applied to other resources like Freebase. AutoExtend achieves state-of-the-art performance on word similarity and word sense disambiguation tasks. 1 Introduction Unsupervised methods for word embeddings (also called “distributed word representations”) have become popular in natural language processing (NLP). These methods only need very large corpora as input to create sparse representations (e.g., based on local collocations) and project them into a lower dimensional dense vector space. Examples for word embeddings are SENNA (Collobert and Weston, 2008), the hierarchical log-bilinear model (Mnih and Hinton, 2009), word2vec (Mikolov et al., 2013c) and GloVe (Pennington et al., 2014). However, there are many other resources that are undoubtedly useful in NLP, including lexical resources like WordNet and Wiktionary and knowledge bases like Wikipedia and Freebase. We will simply call these resources in the rest of the paper. Our goal is to enrich these valuable resources with embeddings for those data types that are not words; e.g., we want to enrich WordNet with embeddings for synsets and lexemes. A synset is a set of synonyms that are interchangeable in some context. A lexeme pairs a particular spelling or pronunciation with a particular meaning, i.e., a lexeme is a conjunction of a word and a synset. Our premise is that many NLP applications will benefit if the non-word data types of resources – e.g., synsets in WordNet – are also available as embeddings. For example, in machine translation, enriching and improving translation dictionaries (cf. Mikolov et al. (2013b)) would benefit from these embeddings because they would enable us to create an enriched dictionary for word senses. Generally, our premise is that the arguments for the utility of embeddings for word forms should carry over to the utility of embeddings for other data types like synsets in WordNet. The insight underlying the method we propose is that the constraints of a resource can be formalized as constraints on embeddings and then allow us to extend word embeddings to embeddings of other data types like synsets. For example, the hyponymy relation in WordNet can be formalized as such a constraint. The advantage of our approach is that it decouples embedding learning from the extension of embeddings to non-word data types in a resource. If somebody comes up with a better way of learning embeddings, these embeddings become immediately usable for resources. And we do not rely on any specific properties of embeddings that make them usable in some resources, but not in others. 
An alternative to our approach is to train embeddings on annotated text, e.g., to train synset embeddings on corpora annotated with synsets. However, successful embedding learning generally requires very large corpora and sense labeling is too expensive to produce corpora of such a size. Another alternative to our approach is to add up all word embedding vectors related to a particular node in a resource; e.g., to create the synset vector of lawsuit in WordNet, we can add the word vectors of the three words that are part of the synset (lawsuit, suit, case). We will call this approach 1793 naive and use it as a baseline (Snaive in Table 3). We will focus on WordNet (Fellbaum, 1998) in this paper, but our method – based on a formalization that exploits the constraints of a resource for extending embeddings from words to other data types – is broadly applicable to other resources including Wikipedia and Freebase. A word in WordNet can be viewed as a composition of several lexemes. Lexemes from different words together can form a synset. When a synset is given, it can be decomposed into its lexemes. And these lexemes then join to form words. These observations are the basis for the formalization of the constraints encoded in WordNet that will be presented in the next section: we view words as the sum of their lexemes and, analogously, synsets as the sum of their lexemes. Another motivation for our formalization stems from the analogy calculus developed by Mikolov et al. (2013a), which can be viewed as a group theory formalization of word relations: we have a set of elements (our vectors) and an operation (addition) satisfying the properties of a mathematical group, in particular, associativity and invertibility. For example, you can take the vector of king, subtract the vector of man and add the vector of woman to get a vector near queen. In other words, you remove the properties of man and add the properties of woman. We can also see the vector of king as the sum of the vector of man and the vector of a gender-neutral ruler. The next thing to notice is that this does not only work for words that combine several properties, but also for words that combine several senses. The vector of suit can be seen as the sum of a vector representing lawsuit and a vector representing business suit. AutoExtend is designed to take word vectors as input and unravel the word vectors to the vectors of their lexemes. The lexeme vectors will then give us the synset vectors. The main contributions of this paper are: (i) We present AutoExtend, a flexible method that extends word embeddings to embeddings of synsets and lexemes. AutoExtend is completely general in that it can be used for any set of embeddings and for any resource that imposes constraints of a certain type on the relationship between words and other data types. (ii) We show that AutoExtend achieves state-of-the-art word similarity and word sense disambiguation (WSD) performance. (iii) We publish the AutoExtend code for extending word embeddings to other data types, the lexeme and synset embeddings and the software to replicate our WSD evaluation. This paper is structured as follows. Section 2 introduces the model, first as a general tensor formulation then as a matrix formulation making additional assumptions. In Section 3, we describe data, experiments and evaluation. We analyze AutoExtend in Section 4 and give a short summary on how to extend our method to other resources in Section 5. Section 6 discusses related work. 
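A minimal sketch of the naive baseline (Snaive) referred to above: the vector of a synset is simply the sum of the word2vec vectors of its member words. The vectors below are toy placeholders, not real embeddings.

```python
import numpy as np

# Naive baseline (S_naive): the synset vector is the sum of the word vectors of its
# member words. Toy 4-dimensional vectors stand in for word2vec embeddings.

word_vec = {
    "lawsuit": np.array([0.9, 0.1, 0.0, 0.2]),
    "suit":    np.array([0.5, 0.4, 0.3, 0.1]),
    "case":    np.array([0.6, 0.2, 0.1, 0.4]),
}

def naive_synset_vector(member_words, word_vec):
    return sum(word_vec[w] for w in member_words if w in word_vec)

print(naive_synset_vector(["lawsuit", "suit", "case"], word_vec))
```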
2 Model We are looking for a model that extends standard embeddings for words to embeddings for the other two data types in WordNet: synsets and lexemes. We want all three data types – words, lexemes, synsets – to live in the same embedding space. The basic premise of our model is: (i) words are sums of their lexemes and (ii) synsets are sums of their lexemes. We refer to these two premises as synset constraints. For example, the embedding of the word bloom is a sum of the embeddings of its two lexemes bloom(organ) and bloom(period); and the embedding of the synset flower-bloomblossom(organ) is a sum of the embeddings of its three lexemes flower(organ), bloom(organ) and blossom(organ). The synset constraints can be argued to be the simplest possible relationship between the three WordNet data types. They can also be motivated by the way many embeddings are learned from corpora – for example, the counts in vector space models are additive, supporting the view of words as the sum of their senses. The same assumption is frequently made; for example, it underlies the group theory formalization of analogy discussed in Section 1. We denote word vectors as w(i) ∈Rn, synset vectors as s(j) ∈Rn, and lexeme vectors as l(i,j) ∈ Rn. l(i,j) is that lexeme of word w(i) that is a member of synset s(j). We set lexeme vectors l(i,j) that do not exist to zero. For example, the non-existing lexeme flower(truck) is set to zero. We can then formalize our premise that the two constraints (i) and (ii) hold as follows: w(i) = X j l(i,j) (1) s(j) = X i l(i,j) (2) 1794 These two equations are underspecified. We therefore introduce the matrix E(i,j) ∈Rn×n: l(i,j) = E(i,j)w(i) (3) We make the assumption that the dimensions in Eq. 3 are independent of each other, i.e., E(i,j) is a diagonal matrix. Our motivation for this assumption is: (i) This makes the computation technically feasible by significantly reducing the number of parameters and by supporting parallelism. (ii) Treating word embeddings on a per-dimension basis is a frequent design choice (e.g., Kalchbrenner et al. (2014)). Note that we allow E(i,j) < 0 and in general the distribution weights for each dimension (diagonal entries of E(i,j)) will be different. Our assumption can be interpreted as word w(i) distributing its embedding activations to its lexemes on each dimension separately. Therefore, Eqs. 1-2 can be written as follows: w(i) = X j E(i,j)w(i) (4) s(j) = X i E(i,j)w(i) (5) Note that from Eq. 4 it directly follows that: X j E(i,j) = In ∀i (6) with In being the identity matrix. Let W be a |W| × n matrix where n is the dimensionality of the embedding space, |W| is the number of words and each row w(i) is a word embedding; and let S be a |S|×n matrix where |S| is the number of synsets and each row s(j) is a synset embedding. W and S can be interpreted as linear maps and a mapping between them is given by the rank 4 tensor E ∈R|S|×n×|W|×n. We can then write Eq. 5 as a tensor product: S = E ⊗W (7) while Eq. 6 states, that X j Ei,d1 j,d2 = 1 ∀i, d1, d2 (8) Additionally, there is no interaction between different dimensions, so Ei,d1 j,d2 = 0 if d1 ̸= d2. In other words, we are creating the tensor by stacking the diagonal matrices E(i,j) over i and j. Another sparsity arises from the fact that many lexemes do not exist: Ei,d1 j,d2 = 0 if l(i,j) = 0; i.e., l(i,j) ̸= 0 only if word i has a lexeme that is a member of synset j. 
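A toy numpy sketch of Eqs. (1)-(6) for the suit example: diagonal matrices distribute each dimension of the word vector over the word's lexemes, the per-dimension weights sum to one, and a synset is the sum of its member lexemes. All numbers are illustrative, not learned values.

```python
import numpy as np

# Toy illustration of Eqs. (1)-(6): the word "suit" distributes each embedding
# dimension over its lexemes suit(lawsuit) and suit(clothing); per dimension the
# distribution weights sum to 1 (Eq. 6), so the lexemes add back up to the word.

w_suit = np.array([0.8, -0.2, 0.5])            # word vector (3 dims, illustrative)

E_lawsuit  = np.diag([0.7, 0.3, 0.4])          # diagonal E(i,j) for lexeme suit(lawsuit)
E_clothing = np.diag([0.3, 0.7, 0.6])          # diagonal E(i,j) for lexeme suit(clothing)
assert np.allclose(E_lawsuit + E_clothing, np.eye(3))   # Eq. (6)

l_lawsuit  = E_lawsuit  @ w_suit               # Eq. (3)
l_clothing = E_clothing @ w_suit

# Eq. (1): the word is the sum of its lexemes.
assert np.allclose(l_lawsuit + l_clothing, w_suit)

# Eq. (2)/(5): a synset is the sum of its member lexemes, possibly from other words.
l_case_lawsuit = np.array([0.1, 0.0, 0.2])     # lexeme of "case" in the lawsuit synset
s_lawsuit = l_lawsuit + l_case_lawsuit
print(s_lawsuit)
```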
To summarize the sparsity: Ei,d1 j,d2 = 0 ⇐d1 ̸= d2 ∨l(i,j) = 0 (9) 2.1 Learning We adopt an autoencoding framework to learn embeddings for lexemes and synsets. To this end, we view the tensor equation S = E ⊗W as the encoding part of the autoencoder: the synsets are the encoding of the words. We define a corresponding decoding part that decodes the synsets into words as follows: s(j) = X i l (i,j), w(i) = X j l (i,j) (10) In analogy to E(i,j), we introduce the diagonal matrix D(j,i): l (i,j) = D(j,i)s(j) (11) In this case, it is the synset that distributes itself to its lexemes. We can then rewrite Eq. 10 to: s(j) = X i D(j,i)s(j), w(i) = X j D(j,i)s(j) (12) and we also get the equivalent of Eq. 6 for D(j,i): X i D(j,i) = In ∀j (13) and in tensor notation: W = D ⊗S (14) Normalization and sparseness properties for the decoding part are analogous to the encoding part: X i Dj,d2 i,d1 = 1 ∀j, d1, d2 (15) Dj,d2 i,d1 = 0 ⇐d1 ̸= d2 ∨l(i,j) = 0 (16) We can state the learning objective of the autoencoder as follows: argmin E,D ∥D ⊗E ⊗W −W∥ (17) under the conditions Eq. 8, 9, 15 and 16. Now we have an autoencoder where input and output layers are the word embeddings and the hidden layer represents the synset vectors. A simplified version is shown in Figure 1. The tensors E 1795 and D have to be learned. They are rank 4 tensors of size ≈1015. However, we already discussed that they are very sparse, for two reasons: (i) We make the assumption that there is no interaction between dimensions. (ii) There are only few interactions between words and synsets (only when a lexeme exists). In practice, there are only ≈107 elements to learn, which is technically feasible. 2.2 Matrix formalization Based on the assumption that each dimension is fully independent from other dimensions, a separate autoencoder for each dimension can be created and trained in parallel. Let W ∈R|W|×n be a matrix where each row is a word embedding and w(d) = W·,d the d-th column of W, i.e., a vector that holds the d-th dimension of each word vector. In the same way, s(d) = S·,d holds the d-th dimension of each synset vector and E(d) = E·,d ·,d ∈ R|S|×|W|. We can write S = E ⊗W as: s(d) = E(d)w(d) ∀d (18) with E(d) i,j = 0 if l(i,j) = 0. The decoding equation W = D ⊗S takes this form: w(d) = D(d)s(d) ∀d (19) where D(d) = D·,d ·,d ∈R|W|×|S| and D(d) j,i = 0 if l(i,j) = 0. So E and D are symmetric in terms of non-zero elements. The learning objective becomes: argmin E(d),D(d)∥D(d)E(d)w(d) −w(d)∥ ∀d (20) 2.3 Lexeme embeddings The hidden layer S of the autoencoder gives us synset embeddings. The lexeme embeddings are defined when transitioning from W to S, or more explicitly by: l(i,j) = E(i,j)w(i) (21) However, there is also a second lexeme embedding in AutoExtend when transitioning form S to W: l (i,j) = D(j,i)s(j) (22) Aligning these two representations seems natural, so we impose the following lexeme constraints: argmin E(i,j),D(j,i) E(i,j)w(i) −D(j,i)s(j) ∀i, j (23) noun verb adj adv hypernymy 84,505 13,256 0 0 antonymy 2,154 1,093 4,024 712 similarity 0 0 21,434 0 verb group 0 1,744 0 0 Table 1: # of WN relations by part-of-speech This can also be expressed dimension-wise. The matrix formulation is given by: argmin E(d),D(d) E(d) diag(w(d)) − D(d) diag(s(d)) T ∀d (24) with diag(x) being a square matrix having x on the main diagonal and vector s(d) defined by Eq. 18. While we try to align the embeddings, there are still two different lexeme embeddings. 
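A toy sketch of one per-dimension autoencoder (Eqs. 18-20): E(d) encodes the d-th components of all word vectors into synset components, D(d) decodes them back, and learning minimizes the reconstruction error. Nonzero entries appear only where a lexeme exists and columns sum to one before training; the numbers below are illustrative, not learned weights.

```python
import numpy as np

# Toy per-dimension autoencoder (Eqs. 18-20). E_d (|S| x |W|) encodes the d-th
# components of the word vectors into synset components, D_d (|W| x |S|) decodes
# them back; training minimizes the reconstruction error. Nonzero entries mark
# existing lexemes; columns sum to 1 (Eqs. 8 and 15). Values are illustrative.

w_d = np.array([0.8, -0.2, 0.5])        # d-th dimension of 3 word vectors

E_d = np.array([[1.0, 0.4, 0.0],        # synset 0 <- words 0 and 1
                [0.0, 0.6, 1.0]])       # synset 1 <- words 1 and 2
D_d = np.array([[0.7, 0.0],             # word 0 <- synset 0
                [0.3, 0.4],             # word 1 <- synsets 0 and 1
                [0.0, 0.6]])            # word 2 <- synset 1

s_d = E_d @ w_d                          # encode: synset components for dimension d
w_d_hat = D_d @ s_d                      # decode: reconstructed word components
print(s_d, w_d_hat, np.sum((w_d_hat - w_d) ** 2))   # squared reconstruction error
```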
In all experiments reported in Section 4 we will use the average of both embeddings and in Section 4 we will analyze the weighting in more detail. 2.4 WN relations Some WordNet synsets contain only a single word (lexeme). The autoencoder learns based on the synset constraints, i.e., lexemes being shared by different synsets (and also words); thus, it is difficult to learn good embeddings for single-lexeme synsets. To remedy this problem, we impose the constraint that synsets related by WordNet (WN) relations should have similar embeddings. Table 1 shows relations we used. WN relations are entered in a new matrix R ∈Rr×|S|, where r is the number of WN relation tuples. For each relation tuple, i.e., row in R, we set the columns corresponding to the first and second synset to 1 and −1, respectively. The values of R are not updated during training. We use a squared error function and 0 as target value. This forces the system to find similar values for related synsets. Formally, the WN relation constraints are: argmin E(d) ∥RE(d)w(d)∥ ∀d (25) 2.5 Implementation Our training objective is minimization of the sum of synset constraints (Eq. 20), weighted by α, the lexeme constraints (Eq. 24), weighted by β, and the WN relation constraints (Eq. 25), weighted by 1 −α −β. The training objective cannot be solved analytically because it is subject to constraints Eq. 8, 1796 L/suit (textil) S/suit-of-clothes L/suit (textil) W/suit L/suit (law) L/suit (law) W/suit W/case L/case S/lawsuit L/case W/case W/lawsuit L/lawsuit L/lawsuit W/lawsuit Figure 1: A small subgraph of WordNet. The circles are intended to show four different embedding dimensions. These dimensions are treated as independent. The synset constraints align the input and the output layer. The lexeme constraints align the second and fourth layers. Eq. 9, Eq. 15 and Eq. 16. We therefore use backpropagation. We do not use regularization since we found that all learned weights are in [−2, 2]. AutoExtend is implemented in MATLAB. We run 1000 iterations of gradient descent. On an Intel Xeon CPU E7-8857 v2 3.00GHz, one iteration on one dimension takes less than a minute because the gradient computation ignores zero entries in the matrix. 2.6 Column normalization Our model is based on the premise that a word is the sum of its lexemes (Eq. 1). From the definition of E(i,j), we derived that E ∈R|S|×n×|W|×n is normalized over the first dimension (Eq. 8). So E(d) ∈R|S|×|W| is also normalized over the first dimension. In other words, E(d) is a column normalized matrix. Another premise of the model is that a synset is the sum of its lexemes. Therefore, D(d) is also column normalized. A simple way to implement this is to start the computation with column normalized matrices and normalize them again after each iteration as long as the error function still decreases. When the error function starts increasing, we stop normalizing the matrices and continue with a normal gradient descent. This respects that while E(d) and D(d) should be column normalized in theory, there are a lot of practical issues that prevent this, e.g., OOV words. 3 Data, experiments and evaluation We downloaded 300-dimensional embeddings for 3,000,000 words and phrases trained on Google News, a corpus of ≈1011 tokens, using word2vec CBOW (Mikolov et al., 2013c). Many words in the word2vec vocabulary are not in WordNet, e.g., inflected forms (cars) and proper nouns (Tony Blair). Conversely, many WordNet lemmas are not in the word2vec vocabulary, e.g., 42 (digits were converted to 0). 
This results in a number of empty synsets (see Table 2). Note however that AutoExtend can produce embeddings for empty synsets because we use WN relation constraints in addition to synset and lexeme constraints. We run AutoExtend on the word2vec vectors. As we do not know anything about a suitable weighting for the three different constraints, we set α = β = 0.33. Our main goal is to produce compatible embeddings for lexemes and synsets. Thus, we can compute nearest neighbors across all three data types as shown in Figure 2. We evaluate the embeddings on WSD and on similarity performance. Our results depend directly on the quality of the underlying word embeddings, in our case word2vec embeddings. We would expect even better evaluation results as word representation learning methods improve. Using a new and improved set of underlying embeddings is simple: it is a simple switch of the input file that contains the word embeddings. 3.1 Word Sense Disambiguation For WSD we use the shared tasks of Senseval2 (Kilgarriff, 2001) and Senseval-3 (Mihalcea et al., 2004) and a system named IMS (Zhong and WordNet ∩word2vec words 147,478 54,570 synsets 117,791 73,844 lexemes 207,272 106,167 Table 2: # of items in WordNet and after intersection with word2vec vectors 1797 nearest neighbors of W/suit S/suit (businessman), L/suit (businessman), L/accomodate, S/suit (be acceptable), L/suit (be acceptable), L/lawsuit, W/lawsuit, S/suit (playing card), L/suit (playing card), S/suit (petition), S/lawsuit, W/countersuit, W/complaint, W/counterclaim nearest neighbors of W/lawsuit L/lawsuit, S/lawsuit, S/countersuit, L/countersuit, W/countersuit, W/suit, W/counterclaim, S/counterclaim (n), L/counterclaim (n), S/counterclaim (v), L/counterclaim (v), W/sue, S/sue (n), L/sue (n) nearest neighbors of S/suit-of-clothes L/suit-of-clothes, S/zoot-suit, L/zoot-suit, W/zoot-suit, S/garment, L/garment, S/dress, S/trousers, L/pinstripe, L/shirt, W/tuxedo, W/gabardine, W/tux, W/pinstripe Figure 2: Five nearest word (W/), lexeme (L/) and synset (S/) neighbors for three items, ordered by cosine Ng, 2010). Senseval-2 contains 139, Senseval-3 57 different words. They provide 8,611, respectively 8,022 training instances and 4,328, respectively 3,944 test instances. For the system, we use the same setting as in the original paper. Preprocessing consists of sentence splitting, tokenization, POS tagging and lemmatization; the classifier is a linear SVM. In our experiments (Table 3), we run IMS with each feature set by itself to assess the relative strengths of feature sets (lines 1– 7) and on feature set combinations to determine which combination is best for WSD (lines 8, 12– 15). IMS implements three standard WSD feature sets: part of speech (POS), surrounding word and local collocation (lines 1–3). Let w be an ambiguous word with k senses. The three feature sets on lines 5–7 are based on the AutoExtend embeddings s(j), 1 ≤j ≤k, of the synsets of w and the centroid c of the sentence in which w occurs. The centroid is simply the sum of all word2vec vectors of the words in the sentence, excluding stop words. The S-cosine feature set consists of the k cosines of centroid and synset vectors: < cos(c, s(1)), cos(c, s(2)), . . . , cos(c, s(k)) > The S-product feature set consists of the nk element-wise products of centroid and synset vectors: < c1s(1) 1 , . . . , cns(1) n , . . . , c1s(k) 1 , . . . , cns(k) n > where ci (resp. s(j) i ) is element i of c (resp. s(j)). 
The idea is that we let the SVM estimate how important each dimension is for WSD instead of giving all equal weight as in S-cosine. The S-raw feature set simply consists of the n(k + 1) elements of centroid and synset vectors: < c1, . . . , cn, s(1) 1 , . . . , s(1) n , . . . , s(k) 1 , . . . , s(k) n > Our main goal is to determine if AutoExtend features improve WSD performance when added to standard WSD features. To make sure that improvements we get are not solely due to the power of word2vec, we also investigate a simple word2vec baseline. For S-product, the AutoExtend feature set that performs best in the experiment (cf. lines 6 and 14), we test the alternative word2vec-based Snaive-product feature set. It has the same definition as S-product except that we replace the synset vectors s(j) with naive synset vectors z(j), defined as the sum of the word2vec vectors of the words that are members of synset j. Lines 1–7 in Table 3 show the performance of each feature set by itself. We see that the synset feature sets (lines 5–7) have a comparable performance to standard feature sets. S-product is the strongest of them. Lines 8–16 show the performance of different feature set combinations. MFS (line 8) is the most frequent sense baseline. Lines 9&10 are the winners of Senseval. The standard configuration of IMS (line 11) uses the three feature sets on lines 1–3 (POS, surrounding word, local collocation) and achieves an accuracy of 65.2% on the English lexical sample task of Senseval-2 and 72.3% on Senseval-3.1 Lines 12–16 add one additional feature set to the IMS system on line 11; e.g., the system on line 14 uses POS, surrounding word, local collocation and S-product feature sets. The system on line 14 outperforms all previous systems, most of them significantly. While S-raw performs quite reasonably as a feature set alone, it hurts the performance when used as an additional feature set. As this is the feature set that contains the largest number of features (n(k + 1)), overfitting is the likely reason. Conversely, S-cosine only adds k features and therefore may suffer from underfitting.† We do a grid search (step size .1) for optimal values of α and β, optimizing the average score of Senseval-2 and Senseval-3. The best performing feature set combination is Soptimized-product with 1Zhong and Ng (2010) report accuracies of 65.3% / 72.6% for this configuration. †In Table 3 and Table 4, results significantly worse than the best (bold) result in each column are marked † for α = .05 and ‡ for α = .10 (one-tailed Z-test). 1798 Senseval-2 Senseval-3 IMS feature sets 1 POS 53.6 58.0† 2 surrounding word 57.6 65.3† 3 local collocation 58.7 64.7† 4 Snaive-product 56.5 62.2† 5 S-cosine 55.5 60.5† 6 S-product 58.3 64.3† 7 S-raw 56.8 63.1† system comparison 8 MFS 47.6† 55.2† 9 Rank 1 system 64.2† 72.9† 10 Rank 2 system 63.8† 72.6† 11 IMS 65.2‡ 72.3‡ 12 IMS + Snaive-prod. 62.6† 69.4† 13 IMS + S-cosine 65.1‡ 72.4‡ 14 IMS + S-product 66.5 73.6† 15 IMS + S-raw 62.1† 66.8† 16 IMS + Soptimized-prod. 66.6 73.6† Table 3: WSD accuracy for different feature sets and systems. Best result (excluding line 16) in each column in bold. α = 0.2 and β = 0.5, with only a small improvement (line 16). The main result of this experiment is that we achieve an improvement of more than 1% in WSD performance when using AutoExtend. 3.2 Synset and lexeme similarity We use SCWS (Huang et al., 2012) for the similarity evaluation. 
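Before turning to the similarity evaluation, the three synset-based feature sets can be assembled compactly as below. The vectors are toy placeholders for the sentence centroid and the synset embeddings of an ambiguous word; the function names are ours.

```python
import numpy as np

# Sketch of the three synset-based feature sets fed to IMS for an ambiguous word
# with synset vectors s_1..s_k and sentence centroid c (sum of the word2vec vectors
# of the context words, stop words removed). Toy vectors; names are illustrative.

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def s_cosine(c, synsets):                    # k features
    return [cosine(c, s) for s in synsets]

def s_product(c, synsets):                   # n*k features (element-wise products)
    return np.concatenate([c * s for s in synsets])

def s_raw(c, synsets):                       # n*(k+1) features (raw concatenation)
    return np.concatenate([c] + list(synsets))

if __name__ == "__main__":
    c = np.array([0.2, 0.1, -0.3])
    synsets = [np.array([0.1, 0.0, -0.2]), np.array([-0.4, 0.3, 0.1])]
    print(s_cosine(c, synsets))
    print(s_product(c, synsets).shape, s_raw(c, synsets).shape)
```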
SCWS provides not only isolated words and corresponding similarity scores, but also a context for each word. SCWS is based on WordNet, but the information as to which synset a word in context came from is not available. However, the dataset is the closest we could find for sense similarity. Synset and lexeme embeddings are obtained by running AutoExtend. Based on the results of the WSD task, we set α = 0.2 and β = 0.5. Lexeme embeddings are the natural choice for this task as human subjects are provided with two words and a context for each and then have to assign a similarity score. But for completeness, we also run experiments for synsets. For each word, we compute a context vector c by adding all word vectors of the context, excluding the test word itself. Following Reisinger and Mooney (2010), we compute the lexeme (resp. synset) vector l either as the simple average of the lexeme (resp. synset) vectors l(ij) (method AvgSim, no dependence on c in this case) or as the average of the lexeme (resp. synset) vectors weighted by cosine similarity to c (method AvgSimC). Table 4 shows that AutoExtend lexeme embeddings (line 7) perform better than previous work, AvgSim AvgSimC 1 Huang et al. (2012) 62.8† 65.7† 2 Tian et al. (2014) – 65.4† 3 Neelakantan et al. (2014) 67.2† 69.3† 4 Chen et al. (2014) 66.2† 68.9† 5 words (word2vec) 66.6‡ 66.6† 6 synsets 62.6† 63.7† 7 lexemes 68.9† 69.8† Table 4: Spearman correlation (ρ × 100) on SCWS. Best result per column in bold. including (Huang et al., 2012) and (Tian et al., 2014). Lexeme embeddings perform better than synset embeddings (lines 7 vs. 6), presumably because using a representation that is specific to the actual word being judged is more precise than using a representation that also includes synonyms. A simple baseline is to use the underlying word2vec embeddings directly (line 5). In this case, there is only one embedding, so there is no difference between AvgSim and AvgSimC. It is interesting that even if we do not take the context into account (method AvgSim) the lexeme embeddings outperform the original word embeddings. As AvgSim simply adds up all lexemes of a word, this is equivalent to the constraint we proposed in the beginning of the paper (Eq. 1). Thus, replacing a word’s embedding by the sum of the embeddings of its senses could generally improve the quality of embeddings (cf. Huang et al. (2012) for a similar point). We will leave a deeper evaluation of this topic for future work. 4 Analysis We first look at the impact of the parameters α, β (Section 2.5) that control the weighting of synset constraints vs lexeme constraints vs WN relation constraints. We investigate the impact for three different tasks. WSD-alone: accuracy of IMS (average of Senseval-2 and Senseval-3) if only Sproduct is used as a feature set (line 6 in Table 3). WSD-additional: accuracy of IMS (average of Senseval-2 and Senseval-3) if S-product is used together with the feature sets POS, surrounding word and local collocation (line 14 in Table 3). SCWS: Spearman correlation on SCWS (line 7 in Table 4). For WSD-alone (Figure 3, center), the best performing weightings (red) all have high weights for WN relations and are therefore at the top of triangle. Thus, WN relations are very important for WSD-alone and adding more weight to the 1799 synset and lexeme constraints does not help. However, all three constraints are important in WSDadditional: the red area is in the middle (corresponding to nonzero weights for all three constraints) in the left panel of Figure 3. 
Apparently, strongly weighted lexeme and synset constraints enable learning of representations that in their interaction with standard WSD feature sets like local collocation increase WSD performance. For SCWS (right panel), we should not put too much weight on WN relations as they artificially bring related, but not similar lexemes together. So the maximum for this task is located in the lower part of the triangle. The main result of this analysis is that AutoExtend never achieves its maximum performance when using only one set of constraints. All three constraints are important – synset, lexeme and WN relation constraints – with different weights for different applications. We also analyzed the impact of the four different WN relations (see Table 1) on performance. In Table 3 and Table 4, all four WN relations are used together. We found that any combination of three relation types performs worse than using all four together. A comparison of different relations must be done carefully as they differ in the POS they affect and in quantity (see Table 1). In general, relation types with more relations outperformed relation types with fewer relations. Finally, the relative weighting of l(i,j) and l (i,j) when computing lexeme embeddings is also a parameter that can be tuned. We use simple averaging (θ = 0.5) for all experiments reported in this paper. We found only small changes in performance for 0.2 ≤θ ≤0.8. 5 Resources other than WordNet AutoExtend is broadly applicable to lexical and knowledge resources that have certain properties. While we only run experiments with WordNet in this paper, we will briefly address other resources. For Freebase (Bollacker et al., 2008), we could replace the synsets with Freebase entities. Each entity has several aliases, e.g. Barack Obama, President Obama, Obama. The role of words in WordNet would correspond to these aliases in Freebase. This will give us the synset constraint, as well as the lexeme constraint of the system. Relations are given by Freebase types; e.g., we can add a constraint that entity embeddings of the type ”President of the US” should be similar. To explorer multilingual word embeddings we require the word embeddings of different languages to live in the same vector space, which can easily be achieved by training a transformation matrix L between two languages using known translations (Mikolov et al., 2013b). Let X be a matrix where each row is a word embedding in language 1 and Y a matrix where each row is a word embedding in language 2. For each row the words of X and Y are a translation of each other. We then want to minimize the following objective: argmin L ∥LX −Y ∥ (26) We can use a gradient descent to solve this but a matrix inversion will run faster. The matrix L is given by: L = (XT ∗X)−1(XT ∗Y ) (27) The matrix L can be used to transform unknown embeddings into the new vector space, which enables us to use a multilingual WordNet like BabelNet (Navigli and Ponzetto, 2010) to compute synset embeddings. We can add cross-linguistic relationships to our model, e.g., aligning German and English synset embeddings of the same concept. 6 Related Work Rumelhart et al. (1988) introduced distributed word representations, usually called word embeddings today. There has been a resurgence of work on them recently (e.g., Bengio et al. (2003) Mnih and Hinton (2007), Collobert et al. (2011), Mikolov et al. (2013a), Pennington et al. (2014)). These models produce only a single embedding for each word. 
All of them can be used as input for AutoExtend. There are several approaches to finding embeddings for senses, variously called meaning, sense and multiple word embeddings. Sch¨utze (1998) created sense representations by clustering context representations derived from co-occurrence. The representation of a sense is simply the centroid of its cluster. Huang et al. (2012) improved this by learning single-prototype embeddings before performing word sense discrimination on them. Bordes et al. (2011) created similarity measures for relations in WordNet and Freebase to learn entity embeddings. An energy based model was 1800 WSD-additional WSD-alone SCWS WN relations lexemes synsets Figure 3: Performance of different weightings of the three constraints (WN relations:top, lexemes:left, synsets:right) on the three tasks WSD-additional, WSD-alone and SCWS. “x” indicates the maximum; “o” indicates a local minimum. proposed by Bordes et al. (2012) to create disambiguated meaning embeddings and Neelakantan et al. (2014) and Tian et al. (2014) extended the Skip-gram model (Mikolov et al., 2013a) to learn multiple word embeddings. While these embeddings can correspond to different word senses, there is no clear mapping between them and a lexical resource like WordNet. Chen et al. (2014) also modified word2vec to learn sense embeddings, each corresponding to a WordNet synset. They use glosses to initialize sense embedding, which in turn can be used for WSD. The sense disambiguated data can again be used to improve sense embeddings. This prior work needs a training step to learn embeddings. In contrast, we can “AutoExtend” any set of given word embeddings – without (re)training them. There is only little work on taking existing word embeddings and producing embeddings in the same space. Labutov and Lipson (2013) tuned existing word embeddings in supervised training, not to create new embeddings for senses or entities, but to get better predictive performance on a task while not changing the space of embeddings. Lexical resources have also been used to improve word embeddings. In the Relation Constrained Model, Yu and Dredze (2014) use word2vec to learn embeddings that are optimized to predict a related word in the resource, with good evaluation results. Bian et al. (2014) used not only semantic, but also morphological and syntactic knowledge to compute more effective word embeddings. Another interesting approach to create sense specific word embeddings uses bilingual resources (Guo et al., 2014). The downside of this approach is that parallel data is needed. We used the SCWS dataset for the word similarity task, as it provides a context. Other frequently used datasets are WordSim-353 (Finkelstein et al., 2001) or MEN (Bruni et al., 2014). And while we use cosine to compute similarity between synsets, there are also a lot of similarity measures that only rely on a given resource, mostly WordNet. These measures are often functions that depend on the provided information like gloss or the topology like shortest-path. Examples include (Wu and Palmer, 1994) and (Leacock and Chodorow, 1998); Blanchard et al. (2005) give a good overview. 7 Conclusion We presented AutoExtend, a flexible method to learn synset and lexeme embeddings from word embeddings. It is completely general and can be used for any other set of embeddings and for any other resource that imposes constraints of a certain type on the relationship between words and other data types. 
Our experimental results show that AutoExtend achieves state-of-the-art performance on word similarity and word sense disambiguation. Along with this paper, we will publish AutoExtend for extending word embeddings to other data types; the lexeme and synset embeddings used in the experiments; and the code needed to replicate our WSD evaluation2. Acknowledgments This work was partially funded by Deutsche Forschungsgemeinschaft (DFG SCHU 2246/2-2). We are grateful to Christiane Fellbaum for discussions leading up to this paper and to the anonymous reviewers for their comments. 2http://cistern.cis.lmu.de/ 1801 References Yoshua Bengio, Rejean Ducharme, and Pascal Vincent. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. Jiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Knowledge-powered deep learning for word embedding. In Proceedings of ECML PKDD. Emmanuel Blanchard, Mounira Harzallah, Henri Briand, and Pascale Kuntz. 2005. A typology of ontology-based semantic measures. In Proceedings of EMOI - INTEROP. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of ACM SIGMOD. Antoine Bordes, Jason Weston, Ronan Collobert, Yoshua Bengio, et al. 2011. Learning structured embeddings of knowledge bases. In Proceedings of AAAI. Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2012. Joint learning of words and meaning representations for open-text semantic parsing. In Proceedings of AISTATS. Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49(1):1–47. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of EMNLP. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of WWW. Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning sense-specific word embeddings by exploiting bilingual resources. In Proceedings of Coling, Technical Papers. Eric H Huang, Richard Socher, Christopher D Manning, and Andrew Y Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of ACL. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of ACL. Adam Kilgarriff. 2001. English lexical sample task description. In Proceedings of SENSEVAL-2. Igor Labutov and Hod Lipson. 2013. Re-embedding words. In Proceedings of ACL. Claudia Leacock and Martin Chodorow. 1998. Combining local context and wordnet similarity for word sense identification. WordNet: An electronic lexical database, 49(2):265–283. Rada Mihalcea, Timothy Chklovski, and Adam Kilgarriff. 2004. The senseval-3 english lexical sample task. In Proceedings of SENSEVAL-3. 
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013c. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS. George A Miller and Walter G Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1–28. Andriy Mnih and Geoffrey Hinton. 2007. Three new graphical models for statistical language modelling. In Proceedings of ICML. Andriy Mnih and Geoffrey E Hinton. 2009. A scalable hierarchical distributed language model. In Proceedings of NIPS. Roberto Navigli and Simone Paolo Ponzetto. 2010. Babelnet: Building a very large multilingual semantic network. In Proceedings of ACL. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In Proceedings of EMNLP. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of EMNLP. Joseph Reisinger and Raymond J Mooney. 2010. Multi-prototype vector-space models of word meaning. In Proceedings of NAACL. Herbert Rubenstein and John B Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. 1802 David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. 1988. Learning representations by backpropagating errors. Cognitive Modeling, 5:213–220. Hinrich Sch¨utze. 1998. Automatic word sense discrimination. Computational Linguistics, 24(1):97– 123. Fei Tian, Hanjun Dai, Jiang Bian, Bin Gao, Rui Zhang, Enhong Chen, and Tie-Yan Liu. 2014. A probabilistic model for learning multi-prototype word embeddings. In Proceedings of Coling, Technical Papers. Zhibiao Wu and Martha Palmer. 1994. Verbs semantics and lexical selection. In Proceedings of ACL. Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In Proceedings of ACL. Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proceedings of ACL, System Demonstrations. 1803
2015
173
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 1804–1813, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Improving Evaluation of Machine Translation Quality Estimation Yvette Graham ADAPT Centre School of Computer Science and Statistics Trinity College Dublin [email protected] Abstract Quality estimation evaluation commonly takes the form of measurement of the error that exists between predictions and gold standard labels for a particular test set of translations. Issues can arise during comparison of quality estimation prediction score distributions and gold label distributions, however. In this paper, we provide an analysis of methods of comparison and identify areas of concern with respect to widely used measures, such as the ability to gain by prediction of aggregate statistics specific to gold label distributions or by optimally conservative variance in prediction score distributions. As an alternative, we propose the use of the unit-free Pearson correlation, in addition to providing an appropriate method of significance testing improvements over a baseline. Components of WMT-13 and WMT-14 quality estimation shared tasks are replicated to reveal substantially increased conclusivity in system rankings, including identification of outright winners of tasks. 1 Introduction Machine Translation (MT) Quality Estimation (QE) is the automatic prediction of machine translation quality without the use of reference translations (Blatz et al., 2004; Specia et al., 2009). Human assessment of translation quality in theory provides the most meaningful evaluation of systems, but human assessors are known to be inconsistent and this causes challenges for quality estimation evaluation. For instance, there is a general lack of consensus both with respect to what provides the most meaningful gold standard representation, as well as best method of comparison of gold labels and system predictions. For example, in the 2014 Workshop on Statistical Machine Translation (WMT), which since 2012 has provided a main venue for evaluation of systems, sentence-level systems were evaluated with respect to three distinct gold standard representations and each of those compared to predictions using four different measures, resulting in a total of 12 different system rankings, 6 identified as official rankings (Bojar et al., 2014). Although the aim of several methods of evaluation is to provide more insight into performance of systems, this also produces conflicting results and raises the question which method of evaluation really identifies the system(s) or method(s) that best predicts translation quality. For example, an extreme case in WMT-14 occurred for sentence-level quality estimation for English-to-Spanish. In each of the 12 system rankings, many systems were tied and this resulted in a total of 22 official winning systems for this language pair. Besides leaving potential users of quality estimation systems at a loss as to what the best system may be, a large number of inconclusive evaluation methodologies is also likely to lead to confusion about which evaluation methods should be applied in general in QE research, or worse still, researchers simply choosing the methodology that favors their system from among the many different methodologies. 
In this paper, we provide an analysis of each of the methodologies used in WMT and widely applied to evaluation of quality estimation systems in general. Our analysis reveals potential flaws in existing methods and we subsequently provide detail of a single method that overcomes previous challenges. To demonstrate, we replicate com1804 ponents of evaluations previously carried out at WMT-13 and WMT-14 sentence-level quality estimation shared tasks. Results reveal substantially more conclusive system rankings, revealing outright winners that had not previously been identified. 2 Relevant Work The Workshop on Statistical Machine Translation (WMT) provides a main venue for evaluation of quality estimation systems, in addition to the rare and highly-valued effort of provision of publicly available data sets to facilitate further research. We provide an analysis of current evaluation methodologies applied not only in the most recent WMT shared task but also widely within quality estimation research. 2.1 WMT-style Evaluation WMT-14 quality estimation evaluation at the sentence-level, Task 1, is comprised of three subtasks. In Task 1.1, human gold labels comprise three levels of translation quality or “perceived post-edit effort” (1 = perfect translation; 2 = near miss translation; 3 = very low quality translation). A possible downside of the evaluation methodology applied in Task 1.1 is firstly that the gold standard representation may be overly coarse-grained. Considering the vast range of possible errors occurring in translations, limiting the levels of translation quality to only three may impact negatively on systems’ ability to discriminate between translations of various quality. More importantly, however, the combination of such coarse-grained gold labels (1, 2 or 3) and comparison of gold labels and system predictions by mean absolute error (MAE) has a counterintuitive effect on system rankings, as systems that produce continuous predictions are at an advantage over those that produce discrete predictions even though gold labels are also discrete. Figure 1(a) shows discrete gold label distributions for the scoring variant of Task 1.1 in WMT-14 and Figure 1(b) prediction distributions for an example system that was at a disadvantage because it restricted its predictions to discrete ratings like those of gold labels, and Figure 1(c) a system that achieves apparent better performance (lower MAE) despite prediction representations mismatching the discrete nature of gold labels. Evaluation of the ranking variant of Task 1.1 r Post-edit Time 0.36 Post-edit Rate 0.69∗∗∗ Table 1: Pearson correlation with HTER scores of post-edit times (PETs) and post-edit rates (PERs) for WMT-14 Task 1.2 and Task 1.3 gold labels, correlation marked with ∗∗∗is significantly greater at p < 0.001. again includes a significant mismatch between representations used as gold labels, which again were limited to the ratings 1, 2 or 3, while systems were required to provide a total-order ranking of test set translations, for example ranks 1600 or 1-450, depending on language pair. Evaluation methodologies applied to ranking tasks may be better facilitated by application of more finegrained gold standard labels that more closely represent total-order rankings of system predictions. Evaluation methodologies applied in Task 1.3 employ the more fine-grained post-edit times (PETs) as translation quality gold labels. 
PETs potentially provide a good indication of the underlying quality of translations, as a translation that takes longer to manually correct is thought to have lower quality. However, we propose what may correspond more directly to translation quality is an alteration of this, a post-edit rate (PER), where PETs are normalized by the number of words in translations. This takes into account the fact that, all else being equal, longer translations simply take a greater amount of time to post-edit than shorter ones. To investigate to what degree PERs may correspond better to translation quality than PETs, we compute correlations of each with HTER gold labels of translations from Task 1.2. Table 6 reveals a significantly higher correlation that exists between PER and HTER compared to PET and HTER (p < 0.001) , and we conclude therefore that the PER of a translation provides a more faithful representation of translation quality than PET, and convert PETs for both predictions and gold labels to PERs (in seconds per word) in our later replication of Task 1.3. In Task 1.2 of WMT-14, gold standard labels used to evaluate systems were in the form of human translation error rates (HTERs) (Snover et al., 2009). HTER scores provide an effective representation for evaluation of quality estima1805 1 2 3 (a) Gold Labels 0.0 0.2 0.4 0.6 0.8 1.0 1 2 3 (b) System A Predictions 0.0 0.2 0.4 0.6 0.8 1.0 MAE: 0.687 0.5 1.0 1.5 2.0 2.5 3.0 3.5 0.0 0.2 0.4 0.6 0.8 1.0 (c) System B Predictions MAE: 0.603 Figure 1: WMT-14 English-to-German quality estimation Task 1.1 where mismatched prediction/gold labels achieves apparent better performance, where (a) gold label distribution; (b) example system disadvantaged by its discrete predictions; (c) example system gaining advantage by its continuous predictions. tion systems, as scores are individually computed per translation using custom post-edited reference translations, avoiding the bias that can occur with metrics that employ generic reference translations. In our later evaluation, we therefore use HTER scores in addition to PERs as suitable gold standard labels. 2.2 Mean Absolute Error Mean absolute error is likely the most widely applied comparison measure of quality estimation system predictions and gold labels, in addition to being the official measure applied to scoring variants of tasks in WMT (Bojar et al., 2014). MAE is the average absolute difference that exists between a system’s predictions and gold standard labels for translations, and a system achieving a lower MAE is considered a better system. Significant issues arise for evaluation of quality estimation systems with MAE when comparing distributions for predictions and gold labels, however. Firstly, a system’s MAE can be lowered not only by individual predictions closer to corresponding gold labels, but also by prediction of aggregate statistics specific to the distribution of gold labels in the particular test set used for evaluation. MAE is most susceptible in this respect when gold labels have a unimodal distribution with relatively low standard deviation. For example, Figure 2(a) shows test set gold label HTER distribution for Task 1.2 in WMT-14 where the bulk of HTERs are located around one main peak with relatively low variance in the distribution. Unfortunately with MAE, a system that correctly predicts the location of the mode of the test set gold distribution and centers predictions around it with an optimally conservative variance can achieve lower MAE and apparent better performance. 
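The figure and tables that follow show this effect on actual WMT-14 submissions; the short synthetic sketch below reproduces the same manipulation (shifting and rescaling an entire prediction distribution toward the gold mean with half the gold standard deviation) on invented data. It is purely illustrative and not part of the original evaluation; numpy is assumed.

```python
import numpy as np

rng = np.random.default_rng(1)
gold = np.clip(rng.normal(0.25, 0.15, size=2000), 0.0, 1.0)          # unimodal, HTER-like gold labels
pred = np.clip(gold + rng.normal(0.0, 0.15, size=2000), 0.0, 1.0)    # an imperfect system

def mae(p, g):
    return float(np.mean(np.abs(p - g)))

# Shift/rescale the *entire* prediction distribution to the gold mean and
# half the gold standard deviation (the manipulation examined below).
rescaled = gold.mean() + 0.5 * gold.std() * (pred - pred.mean()) / pred.std()

print(mae(pred, gold), mae(rescaled, gold))                 # MAE drops after rescaling
print(np.corrcoef(pred, gold)[0, 1],
      np.corrcoef(rescaled, gold)[0, 1])                    # Pearson r is unchanged
```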
Figure 2(b) shows a lower MAE can be achieved by rescaling the original prediction distribution for an example system to a distribution with lower variance. A disadvantage of an ability to gain in performance by prediction of such features of a given test set is that prediction of aggregates is, in general, far easier than individual predictions. In addition, inclusion of confounding test set aggregates such as these in evaluations will likely lead to both an overestimate of the ability of some systems to predict the quality of unseen translations and an underestimate of the accuracy of systems that courageously attempt to predict the quality of translations in the tails of gold distributions, and it follows that systems optimized for MAE can be expected to perform badly when predicting the quality of translations in the tails of gold label distributions (Moreau and Vogel, 2014). Table 2 shows how MAEs of original predicted score distributions for all systems participating in Task 1.2 WMT-14 can be reduced by shifting and rescaling the prediction score distribution according to gold label aggregates. Table 3 shows that for similar reasons other measures commonly applied to evaluation of quality estimation systems, such as root mean squared error (RMSE), that are also not unit-free, encounter the same problem. 2.3 Significance Testing In quality estimation, it is common to apply bootstrap resampling to assess the likelihood that a decrease in MAE (an improvement) has occurred by chance. In contrast to other areas of MT, where the accuracy of randomized methods of signifi1806 0.0 0.2 0.4 0.6 0.8 1.0 0 1 2 3 4 5 6 (a) Original Predictions HTER MAE: 0.1504 Multilizer Gold 0.0 0.2 0.4 0.6 0.8 1.0 0 1 2 3 4 5 6 (b) Rescaled Predictions HTER MAE: 0.1416 Rescaled Multilizer Gold Figure 2: Comparison of example system from WMT-14 English-to-Spanish Task 1.2 (a) original prediction distribution and gold labels and (b) the same when the prediction distribution is rescaled to half its original standard deviation, showing a lower MAE can be achieved by reducing the variance in prediction distributions. Original Rescaled MAE MAE FBK-UPV-UEDIN-wp 0.129 0.125 DCU-rtm-svr 0.134 0.127 USHEFF 0.136 0.133 DCU-rtm-tree 0.140 0.129 DFKI-svr 0.143 0.132 FBK-UPV-UEDIN-nowp 0.144 0.137 SHEFF-lite-sparse 0.150 0.141 Multilizer 0.150 0.135 baseline 0.152 0.149 DFKI-svr-xdata 0.161 0.146 SHEFF-lite 0.182 0.168 Table 2: MAE of WMT-14 Task 1.2 systems for original HTER prediction distributions and when distributions are shifted and rescaled to the mean and half the standard deviation of the gold label distribution. Original Rescaled RMSE RMSE FBK-UPV-UEDIN-wp 0.167 0.166 DCU-rtm-svr 0.167 0.165 DCU-rtm-tree 0.175 0.169 DFKI-svr 0.177 0.171 USHEFF 0.178 0.178 FBK-UPV-UEDIN-nowp 0.181 0.180 SHEFF-lite-sparse 0.184 0.179 baseline 0.195 0.194 DFKI-svr-xdata 0.195 0.187 Multilizer 0.209 0.181 SHEFF-lite 0.234 0.216 Table 3: RMSE of WMT-14 Task 1.2 systems for original HTER prediction distributions and when distributions are shifted and rescaled to the mean and half the standard deviation of the gold label distribution. cance testing such as bootstrap resampling in combination with BLEU and other metrics have been empirically evaluated (Koehn, 2004; Graham et al., 2014), to the best of our knowledge no research has been carried out to assess the accuracy of similar methods specifically for quality estimation evaluation. 
In addition, since data used for evaluation of quality estimation systems are not independent, methods of significance testing differences in performance will be inaccurate unless the dependent nature of the data is taken into account. 3 Quality Estimation Evaluation by Pearson Correlation The Pearson correlation is a measure of the linear correlation between two variables, and in the case of quality estimation evaluation this amounts to the linear correlation between system predictions and gold labels. Pearson’s r overcomes the outlined challenges of previous approaches, such as mean absolute error, for several reasons. Firstly, Pearson’s r is a unit-free measure with a key property being that the correlation coefficient is invariant to separate changes in location and scale in either of the two variables. This has the obvious advantage over MAE that the coefficient cannot be altered by shifting or rescaling prediction score distributions according to aggregates specific to the test set. To illustrate, Figure 3 depicts a pair of systems for which the baseline system appears to outperform the other when evaluated with MAE, but this is only due to the conservative variance in its pre1807 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 (a) Raw baseline HTER Prediction (raw) HTER Gold (raw) MAE: 0.148 0.0 0.2 0.4 0.6 0.8 1.0 0 1 2 3 4 5 6 7 (b) baseline HTER Density Gold Prediction MAE: 0.148 −2 0 2 4 −1 0 1 2 3 4 (c) Stand. baseline Prediction PER (z) Gold PER (z) r: 0.451 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.2 0.4 0.6 0.8 1.0 (d) Raw CMU−ISL−full HTER Prediction (raw) HTER Gold (raw) MAE: 0.152 0.0 0.2 0.4 0.6 0.8 1.0 0 1 2 3 4 5 6 7 (e) CMU−ISL−full HTER Density Gold Prediction MAE: 0.152 −2 0 2 4 −1 0 1 2 3 4 (f) Stand. CMU−ISL−full Prediction PER (z) Gold PER (z) r: 0.494 Figure 3: WMT-13 Task 1.1 systems showing baseline with better MAE than CMU-ISL-FULL only due to conservative variance in prediction distribution and despite its weaker correlation with gold labels. diction score distribution, as can be seen by the narrow blue spike in Figure 3(b). Figure 3(e) shows how the prediction distribution of CMUISL-FULL, on the other hand, has higher variance, and subsequently higher MAE. Figures 3(c) and 3(f) depict what occurs in computation of the Pearson correlation where raw prediction and gold label scores are replaced by standardized scores, i.e. numbers of standard deviations from the mean of each distribution, where CMU-ISL-FULL in fact achieves a significantly higher correlation than the baseline system at p < 0.001. An additional advantage of the Pearson correlation is that coefficients do not change depending on the representation used in the gold standard in the way they do with MAE, making possible a comparison of performance across evaluations that employ different gold label representations. Additionally, there is no longer a need for training and test representations to directly correspond to one another. To demonstrate, in our later evaluation we include the evaluation of systems trained on both HTER and PETs for prediction of both HTER and PERs. Finally, when evaluated with the Pearson correlation significance tests can be applied without resorting to randomized methods, in addition to taking into account the dependent nature of data used in evaluations. The fact that the Pearson correlation is invariant to separate shifts in location and scale of either of the two variables is nonproblematic for evaluation of quality estimation systems. 
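A small numeric sketch of these properties, before the counter-argument considered next: Pearson's r computed over standardized scores is unchanged by a shift or positive rescaling of an entire prediction distribution, while MAE is not, and distorting only part of the distribution does weaken r. The data below are invented and numpy is assumed; this is not part of the original paper's tooling.

```python
import numpy as np

def pearson_r(x, y):
    # Pearson's r as the mean product of standardized scores.
    zx = (x - x.mean()) / x.std()
    zy = (y - y.mean()) / y.std()
    return float(np.mean(zx * zy))

def mae(p, g):
    return float(np.mean(np.abs(p - g)))

rng = np.random.default_rng(2)
gold = rng.uniform(0.0, 1.0, size=500)
pred = 0.7 * gold + 0.1 * rng.normal(size=500)        # toy predictions

shifted = 0.5 * pred + 0.2                            # shift/rescale the whole distribution
print(np.isclose(pearson_r(pred, gold), pearson_r(shifted, gold)))  # True: r is invariant
print(mae(pred, gold), mae(shifted, gold))                          # MAE is not

distorted = pred.copy()
distorted[:250] += 0.3                                # distort only part of the distribution
print(pearson_r(distorted, gold) < pearson_r(pred, gold))           # True: r weakens
```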
Take, for instance, the possible counter-argument: a pair of systems, one of which predicts the precise gold distribution, and another system predicting the gold distribution + 1, would unfairly receive the same Pearson correlation coefficient. Firstly, it is just as difficult to predict the gold distribution + 1, as it is to predict the gold distribution itself. More importantly, however, the scenario is extremely unlikely to occur in practice, it is highly unlikely that a system would ever accurately predict the gold distribution + 1, as opposed to the actual gold distribution unless training labels were adjusted in the same manner, or indeed predict the gold distribution shifted or rescaled by any other constant value. It is important to understand that invariance of the Pear1808 son correlation to a shift in location or scale means that the measure is only invariant to a shift in location or scale applied to the entire distribution (of either of the two variables), such as the shift in location and scale that can be used to boost apparent performance of systems when measures like MAE and RMSE, that are not unit-free, are employed. Increasing the distance between system predictions and gold labels for anything less than the entire distribution, a more realistic scenario, or by something other than a constant across the entire distribution, will result in an appropriately weaker Pearson correlation. 4 Quality Estimation Significance Testing Previous work has shown the suitability of Williams significance test (Williams, 1959) for evaluation of automatic MT metrics (Graham and Baldwin, 2014; Graham et al., 2015), and, for similar reasons, Williams test is appropriate for significance testing differences in performance of competing quality estimation systems which we detail further below. Evaluation of a given quality estimation system, Pnew, by Pearson correlation takes the form of quantifying the correlation, r(Pnew, G), that exists between system prediction scores and corresponding gold standard labels, and contrasting this correlation with the correlation for some baseline system, r(Pbase, G). At first it might seem reasonable to perform significance testing in the following manner when an increase in correlation with gold labels is observed: apply a significance test separately to the correlation of each quality estimation system with gold labels, with the hope that the new system will achieve a significant correlation where the baseline system does not. The reasoning here is flawed however: the fact that one correlation is significantly higher than zero (r(Pnew, G)) and that of another is not, does not necessarily mean that the difference between the two correlations is significant. Instead, a specific test should be applied to the difference in correlations on the data. For this same reason, confidence intervals for individual correlations with gold labels are also not useful. In psychology, it is often the case that samples that data are drawn from are independent, and differences in correlations are computed on independent data sets. In such cases, the Fisher r to z transformation is applied to test for significant differences in correlations. Data used for evaluation of quality estimation systems are not independent, however, and this means that if r(Pbase, G) and r(Pnew, G) are both > 0, the correlation between both sets of predictions themselves, r(Pbase, Pnew), must also be > 0. 
The strength of this correlation, directly between predictions of pairs of quality estimation systems, should be taken into account using a significance test of the difference in correlation between r(Pbase, G) and r(Pnew, G). Williams test 1 (Williams, 1959) evaluates the significance of a difference in dependent correlations (Steiger, 1980). It is formulated as follows as a test of whether the population correlation between X1 and X3 equals the population correlation between X2 and X3: t(n −3) = (r13 −r23) p (n −1)(1 + r12) q 2K (n−1) (n−3) + (r23+r13)2 4 (1 −r12)3 , where rij is the correlation between Xi and Xj, n is the size of the population, and: K = 1 −r122 −r132 −r232 + 2r12r13r23 As part of this research, we have made available an open-source implementation of statistical tests tailored to the assessment of quality estimation systems, at https://github.com/ ygraham/mt-qe-eval. 5 Evaluation and Discussion To demonstrate the use of the Pearson correlation as an effective mechanism for evaluation of quality estimation systems, we rerun components of previous evaluations originally carried out at WMT13 and WMT-14. Table 4 shows Pearson correlations for systems participating in WMT-13 Task 1.1 where gold labels were in the form of HTER scores. System rankings diverge considerably from original rankings, notably the top system according to the Pearson correlation is tied in fifth place when evaluated with MAE. Table 5 shows Pearson correlations of systems that took part in Task 1.2 of WMT-14, where gold labels were again in the form of HTER scores, 1Also known as Hotelling-Williams. 1809 System r MAE DCU-SYMC-rc 0.595 0.135 SHEFMIN-FS 0.575 0.124 DCU-SYMC-ra 0.572 0.135 CNGL-SVRPLS 0.560 0.133 CMU-ISL-noB 0.516 0.138 CNGL-SVR 0.508 0.138 CMU-ISL-full 0.494 0.152 fbk-uedin-extra 0.483 0.144 LIMSI-ELASTIC 0.475 0.133 SHEFMIN-FS-AL 0.474 0.130 LORIA-INCTRA-CONT 0.474 0.148 fbk-uedin-rsvr 0.464 0.145 LORIA-INCTRA 0.461 0.148 baseline 0.451 0.148 TCD-CNGL-OPEN 0.329 0.148 TCD-CNGL-RESTR 0.291 0.152 UMAC-EBLEU 0.113 0.170 Table 4: Pearson correlation and MAE of system HTER predictions and gold labels for English-toSpanish WMT-13 Task 1.1. and to demonstrate the ability of evaluation of systems trained on a representation distinct from that of gold labels made possible by the unit-free Pearson correlation, we also include evaluation of systems originally trained on PET labels to predict HTER scores. Since PET systems also produce predictions in the form of PET, we convert predictions for all systems to PERs prior to computation of correlations, as PERs more closely correspond to translation quality. Results reveal that systems originally trained on PETs in general perform worse than HTER trained systems, and this is not all that surprising considering the training representation did not correspond well to translation quality. Again system rankings diverge from MAE rankings with the second best system according to MAE moved to the initial position. Table 6 shows Pearson correlations for predictions of PER for systems trained on either PETs or HTER, and predictions for systems trained on PETs are converted to PER for evaluation. System rankings diverge most for this data set from the original rankings by MAE, as the system holding initial position according to MAE moves to position 13 according to the Pearson correlation. 
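Before the pairwise significance tests reported in the next subsection, the Williams test statistic given above can be sketched as follows. This is an illustrative reimplementation, not the open-source package released by the author at the GitHub address above; scipy is assumed and the example correlations are invented.

```python
import numpy as np
from scipy.stats import t as t_dist

def williams_test(r12, r13, r23, n):
    # r13, r23: correlation of each system's predictions with the gold labels;
    # r12: correlation between the two systems' predictions; n: test set size.
    # One-tailed test that population corr(X1, X3) exceeds corr(X2, X3).
    K = 1 - r12**2 - r13**2 - r23**2 + 2 * r12 * r13 * r23
    num = (r13 - r23) * np.sqrt((n - 1) * (1 + r12))
    den = np.sqrt(2 * K * (n - 1) / (n - 3) + ((r23 + r13) ** 2 / 4) * (1 - r12) ** 3)
    t_stat = num / den
    p_value = 1 - t_dist.cdf(t_stat, df=n - 3)
    return t_stat, p_value

# Invented example: system 1 correlates 0.55 with gold, system 2 correlates 0.50,
# their predictions correlate 0.80 with each other, on a 450-sentence test set.
print(williams_test(r12=0.80, r13=0.55, r23=0.50, n=450))
```

In practice the three correlations would be computed from the prediction vectors themselves; only their values and the test set size enter the statistic.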
Many of the differences in correlation between systems in Tables 4, 5 and 6 are small and instead of assuming that an increase in correlation of one system over another corresponds to an improvement in performance, we first apply significance testing to differences in correlation with gold labels that exist between correlations for each pair Training QE Labels System r MAE HTER DCU-rtm-svr 0.550 0.134 HTER FBK-UPV-UEDIN-wp 0.540 0.129 HTER DCU-rtm-tree 0.518 0.140 HTER DFKI-svr 0.501 0.143 HTER USHEFF 0.432 0.136 HTER SHEFF-lite-sparse 0.428 0.150 HTER FBK-UPV-UEDIN-nowp 0.414 0.144 HTER Multilizer 0.409 0.150 PET DCU-rtm-rr 0.350 − HTER DFKI-svr-xdata 0.349 0.161 PET FBK-UPV-UEDIN-wp 0.346 − PET Multilizer-2 0.331 − PET Multilizer-1 0.328 − PET DCU-rtm-svr 0.315 − HTER baseline 0.283 0.152 PET FBK-UPV-UEDIN-nowp 0.279 − PET USHEFF 0.246 − PET baseline 0.246 − PET SHEFF-lite-sparse 0.229 − PET SHEFF-lite 0.194 − HTER SHEFF-lite 0.052 0.182 Table 5: Pearson correlation and MAE of system HTER predictions and gold labels for English-toSpanish WMT-14 Task 1.2 and 1.3 systems trained on either HTER or PET labelled data. Training QE Labels System r MAE HTER FBK-UPV-UEDIN-wp 0.529 − PET FBK-UPV-UEDIN-wp 0.472 0.972 HTER FBK-UPV-UEDIN-nowp 0.452 − HTER USHEFF 0.444 − HTER DCU-rtm-svr 0.444 − HTER DCU-rtm-tree 0.442 − HTER SHEFF-lite-sparse 0.441 − PET DCU-rtm-rr 0.430 0.932 PET FBK-UPV-UEDIN-nowp 0.423 1.012 HTER DFKI-svr 0.412 − PET USHEFF 0.394 1.358 PET baseline 0.394 1.359 PET DCU-rtm-svr 0.365 0.915 HTER Multilizer 0.361 − PET SHEFF-lite-sparse 0.337 0.951 PET SHEFF-lite 0.323 0.940 PET Multilizer-1 0.288 0.993 HTER baseline 0.286 − HTER DFKI-svr-xdata 0.277 − PET Multilizer-2 0.271 0.972 HTER SHEFF-lite 0.011 − Table 6: Pearson correlation of system PER predictions and gold labels for English-to-Spanish WMT-14 Task 1.2 and 1.3 systems trained on either HTER or PET labelled data, mean absolute error (MAE) provided are in seconds per word. 1810 r DCU.rtm.svr FBK.UPV.UEDIN.wp DCU.rtm.tree DFKI.svr USHEFF SHEFF.lite.sparse FBK.UPV.UEDIN.nowp Multilizer DFKI.svr.xdata baseline SHEFF.lite SHEFF−lite baseline DFKI−svr−xdata Multilizer FBK−UPV−UEDIN−nowp SHEFF−lite−sparse USHEFF DFKI−svr DCU−rtm−tree FBK−UPV−UEDIN−wp DCU−rtm−svr Figure 4: Pearson correlation between prediction scores for all pairs of systems participating in WMT-14 Task 1.2 of systems. 5.1 Significance Tests When an increase in correlation with gold labels is present for a pair of systems, significance tests provide insight into the likelihood that such an increase has occurred by chance. As described in detail in Section 4, the Williams test (Williams, 1959), a test also appropriate for MT metrics evaluated by the Pearson correlation (Graham and Baldwin, 2014), is appropriate for testing the significance of a difference in dependent correlations and therefore provides a suitable method of significance testing for quality estimation systems. Figure 4 provides an example of the strength of correlations that commonly exist between predictions of quality estimation systems. Figure 5 shows significance test outcomes of the Williams test for systems originally taking part in WMT-13 Task 1.1, with systems ordered by strongest to least Pearson correlation with gold labels, where a green cell in (row i, column j) signifies a significant win for row i system over column j system, where darker shades of green signify conclusions made with more certainty. 
Test outcomes allow identification of significant increases DCU.SYMC.rc SHEFMIN.FS DCU.SYMC.ra CNGL.SVRPLS CMU.ISL.noB CNGL.SVR CMU.ISL.full fbk.uedin.extra LIMSI.ELASTIC SHEFMIN.FS.AL LORIA.INCTR.CONT fbk.uedin.r.svr LORIA_INCTR baseline TCD.CNGL.OPEN TCD.CNGL.RESTR UMAC_EBLEU UMAC_EBLEU TCD−CNGL−RESTR TCD−CNGL−OPEN baseline LORIA−INCTRA fbk−uedin−rand−svr LORIA_INCTR−CONT SHEFMIN−FS−AL LIMSI−ELASTIC fbk−uedin−extra CMU−ISL−full CNGL−SVR CMU−ISL−noB CNGL−SVRPLS DCU−SYMC−ra SHEFMIN−FS DCU−SYMC−rc Figure 5: HTER prediction significance test outcomes for all pairs of systems from English-toSpanish WMT-13 Task 1.1, colored cells denote a significant increase in correlation with gold labels for row i system over column j system. in correlation with gold labels of one system over another, and subsequently the systems shown to outperform others. Test outcomes in Figure 5 reveal substantially increased conclusivity in system rankings made possible with the application of the Pearson correlation and Williams test, with almost an unambiguous total-order ranking of systems and an outright winner of the task. Figure 6 shows outcomes of Williams significance tests for prediction of HTER and Figure 7 shows outcomes of tests for PER prediction for WMT-14 English-to-Spanish, again showing substantially increased conclusivity in system rankings for tasks. It is important to note that the number of competing systems a system significantly outperforms should not be used as the criterion for ranking competing quality estimation systems, since the power of the Williams test changes depending on the degree to which predictions of a pair of systems correlate with each other. A system with predictions that happen to correlate strongly with predictions of many other systems would be at an unfair advantage, were numbers of significant wins to be used to rank systems. For this reason, it is 1811 HTER.DCU.rtm.svr HTER.FBK.UPV.UEDIN.wp HTER.DCU.rtm.tree HTER.DFKI.svr HTER.USHEFF HTER.SHEFF.lite.sparse HTER.FBK.UPV.UEDIN.nowp HTER.Multilizer PET.DCU.rtm.rr HTER.DFKI.svr.xdata PET.FBK.UPV.UEDIN.wp PET.Multilizer.2 PET.Multilizer.1 PET.DCU.rtm.svr HTER.baseline PET.FBK.UPV.UEDIN.nowp PET.USHEFF PET.baseline PET.SHEFF.lite.sparse PET.SHEFF.lite HTER.SHEFF.lite HTER−SHEFF−lite PET−SHEFF−lite PET−SHEFF−lite−sparse PET−baseline PET−USHEFF PET−FBK−UPV−UEDIN−nowp HTER−baseline PET−DCU−rtm−svr PET−Multilizer−1 PET−Multilizer−2 PET−FBK−UPV−UEDIN−wp HTER−DFKI−svr−xdata PET−DCU−rtm−rr HTER−Multilizer HTER−FBK−UPV−UEDIN−nowp HTER−SHEFF−lite−sparse HTER−USHEFF HTER−DFKI−svr HTER−DCU−rtm−tree HTER−FBK−UPV−UEDIN−wp HTER−DCU−rtm−svr Figure 6: HTER prediction significance test outcomes for all pairs of systems from English-toSpanish WMT-14 Task 1.2, colored cells denote a significant increase in correlation with gold labels for row i system over column j system. 
HTER.FBK.UPV.UEDIN.wp PET.FBK.UPV.UEDIN.wp HTER.FBK.UPV.UEDIN.nowp HTER.USHEFF HTER.DCU.rtm.svr HTER.DCU.rtm.tree HTER.SHEFF.lite.sparse PET.DCU.rtm.rr PET.FBK.UPV.UEDIN.nowp HTER.DFKI.svr PET.USHEFF PET.baseline PET.DCU.rtm.svr HTER.Multilizer PET.SHEFF.lite.sparse PET.SHEFF.lite PET.Multilizer.1 HTER.baseline HTER.DFKI.svr.xdata PET.Multilizer.2 HTER.SHEFF.lite HTER−SHEFF−lite PET−Multilizer−2 HTER−DFKI−svr−xdata HTER−baseline PET−Multilizer−1 PET−SHEFF−lite PET−SHEFF−lite−sparse HTER−Multilizer PET−DCU−rtm−svr PET−baseline PET−USHEFF HTER−DFKI−svr PET−FBK−UPV−UEDIN−nowp PET−DCU−rtm−rr HTER−SHEFF−lite−sparse HTER−DCU−rtm−tree HTER−DCU−rtm−svr HTER−USHEFF HTER−FBK−UPV−UEDIN−nowp PET−FBK−UPV−UEDIN−wp HTER−FBK−UPV−UEDIN−wp Figure 7: PER prediction significance test outcomes for all pairs of systems from English-toSpanish WMT-14 Task 1.3, colored cells denote a significant increase in correlation with gold labels for row i system over column j system. best to interpret pairwise system tests in isolation. 6 Conclusion We have provided a critique of current widely used methods of evaluation of quality estimation systems and highlighted potential flaws in existing methods, with respect to the ability to boost scores by prediction of aggregate statistics specific to the particular test set in use or conservative variance in prediction distributions. We provide an alternate mechanism, and since the Pearson correlation is a unit-free measure, it can be applied to evaluation of quality estimation systems avoiding the previous vulnerabilities of measures such as MAE and RMSE. Advantages also outlined are that training and test representations no longer need to directly correspond in evaluations as long as labels comprise a representation that closely reflects translation quality. We demonstrated the suitability of the proposed measures through replication of components of WMT-13 and WMT-14 quality estimation shared tasks, revealing substantially increased conclusivity of system rankings. Acknowledgements We wish to thank the anonymous reviewers for their valuable comments and WMT organizers for the provision of data sets. This research is supported by Science Foundation Ireland through the CNGL Programme (Grant 12/CE/I2267) in the ADAPT Centre (www.adaptcentre.ie) at Trinity College Dublin. References J. Blatz, E. Fitzgerald, G. Foster, S. Gandrabur, C. Goutte, A. Kulesza, A. Sanchis, and N. Ueffing. 2004. Confidence estimation for machine translation. In Proceedings of the 20th international conference on Computational Linguistics, pages 315– 321. Association for Computational Linguistics. O. Bojar, C. Buck, C. Federmann, B. Haddow, P. Koehn, J. Leveling, C. Monz, P. Pecina, M. Post, H. Saint-Amand, R. Soricut, L. Specia, and A. Tamchyna. 2014. Findings of the 2014 Workshop on Statistical Machine Translation. In Proc. 9th Wkshp. Statistical Machine Translation, Baltimore, MA. Association for Computational Linguistics. Y. Graham and T. Baldwin. 2014. Testing for significance of increased correlation with human judgment. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 172–176, Doha, Qatar. Association for Computational Linguistics. Y. Graham, N. Mathur, and T. Baldwin. 2014. Randomized significance tests in machine translation. 1812 In Proceedings of the ACL 2014 Ninth Workshop on Statistical Machine Translation, pages 266–274. Association for Computational Linguistics. Yvette Graham, Nitika Mathur, and Timothy Baldwin. 2015. 
Accurate evaluation of segment-level machine translation metrics. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics Human Language Technologies, Denver, Colorado. P. Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proc. of Empirical Methods in Natural Language Processing, pages 388–395, Barcelona, Spain. Association for Computational Linguistics. E. Moreau and C. Vogel. 2014. Limitations of mt quality estimation supervised systems: The tails prediction problem. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics, pages 2205–2216. M. Snover, N. Madnani, B.J. Dorr, and R. Schwartz. 2009. Fluency, adequacy, or hter?: exploring different human judgments with a tunable mt metric. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pages 259–268. Association for Computational Linguistics. L. Specia, M. Turchi, N. Cancedda, M. Dymetman, and N. Cristianini. 2009. Estimating the sentence-level quality of machine translation systems. In 13th Conference of the European Association for Machine Translation, pages 28–37. J.H. Steiger. 1980. Tests for comparing elements of a correlation matrix. Psychological Bulletin, 87(2):245. E.J. Williams. 1959. Regression analysis, volume 14. Wiley New York. 1813
2015
174
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 177–187, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Stacked Ensembles of Information Extractors for Knowledge-Base Population Nazneen Fatema Rajani∗ Vidhoon Viswanathan∗Yinon Bentor Raymond J. Mooney Department of Computer Science University of Texas at Austin Austin, TX 78712, USA {nrajani,vidhoon,yinon,mooney}@cs.utexas.edu Abstract We present results on using stacking to ensemble multiple systems for the Knowledge Base Population English Slot Filling (KBP-ESF) task. In addition to using the output and confidence of each system as input to the stacked classifier, we also use features capturing how well the systems agree about the provenance of the information they extract. We demonstrate that our stacking approach outperforms the best system from the 2014 KBPESF competition as well as alternative ensembling methods employed in the 2014 KBP Slot Filler Validation task and several other ensembling baselines. Additionally, we demonstrate that including provenance information further increases the performance of stacking. 1 Introduction Using ensembles of multiple systems is a standard approach to improving accuracy in machine learning (Dietterich, 2000). Ensembles have been applied to a wide variety of problems in natural language processing, including parsing (Henderson and Brill, 1999), word sense disambiguation (Pedersen, 2000), and sentiment analysis (Whitehead and Yaeger, 2010). This paper presents a detailed study of ensembling methods for the TAC Knowledge Base Population (KBP) English Slot Filling (ESF) task (Surdeanu, 2013; Surdeanu and Ji, 2014). We demonstrate new state-of-the-art results on this KBP task using stacking (Wolpert, 1992), which trains a final classifier to optimally combine the results of multiple systems. We present results for stacking all systems that competed in both the 2013 and 2014 KBP-ESF tracks, training ∗These authors contributed equally on 2013 data and testing on 2014 data. The resulting stacked ensemble outperforms all systems in the 2014 competition, obtaining an F1 of 48.6% compared to 39.5% for the best performing system in the most recent competition. Although the associated KBP Slot Filler Validation (SFV) Track (Wang et al., 2013; Yu et al., 2014; Sammons et al., 2014) is officially focused on improving the precision of individual existing systems by filtering their results, frequently participants in this track also combine the results of multiple systems and also report increased recall through this use of ensembling. However, SFV participants have not employed stacking, and we demonstrate that our stacking approach outperforms existing published SFV ensembling systems. KBP ESF systems must also provide provenance information, i.e. each extracted slot-filler must include a pointer to a document passage that supports it (Surdeanu and Ji, 2014). Some SFV systems have used this provenance information to help filter and combine extractions (Sammons et al., 2014). Therefore, we also explored enhancing our stacking approach by including additional input features that capture provenance information. By including features that quantify how much the ensembled systems agree on provenance, we further improved our F1 score for the 2014 ESF task to 50.1%. The remainder of the paper is organized as follows. 
Section 2 provides background information on existing KBP-ESF systems and stacking. Section 3 provides general background on the KBPESF task. Section 4 describes our stacking approach, including how provenance information is used. Section 5 presents comprehensive experiments comparing this approach to existing results and several additional baselines, demonstrating new state-of-the-art results on KBP-ESF. Section 6 reviews prior related work on ensembling 177 for information extraction. Section 7 presents our final conclusions and proposed directions for future research. 2 Background For the past few years, NIST has been conducting the English Slot Filling (ESF) Task in the Knowledge Base Population (KBP) track among various other tasks as a part of the Text Analysis Conference(TAC)(Surdeanu, 2013; Surdeanu and Ji, 2014). In the ESF task, the goal is to fill specific slots of information for a given set of query entities (people or organizations) based on a supplied text corpus. The participating systems employ a variety of techniques in different stages of the slot filling pipeline, such as entity search, relevant document extraction, relation modeling and inference. In 2014, the top performing system, DeepDive with Expert Advice from Stanford University (Wazalwar et al., 2014), employed distant supervision (Mintz et al., 2009) and Markov Logic Networks (Domingos et al., 2008) in their learning and inferencing system. Another system, RPI BLENDER (Hong et al., 2014), used a restricted fuzzy matching technique in a framework that learned event triggers and employed them to extract relations from documents. Given the diverse set of slot-filling systems available, it is interesting to explore methods for ensembling these systems. In this regard, TAC also conducts a Slot Filler Validation (SFV) task who goal is to improve the slot-filling performance using the output of existing systems. The input for this task is the set of outputs from all slotfilling systems and the expected output is a filtered set of slot fills. As with the ESF task, participating systems employ a variety of techniques to perform validation. For instance, RPI BLENDER used a Multi-dimensional Truth Finding model (Yu et al., 2014) which is an unsupervised validation approach based on computing multidimensional credibility scores. The UI CCG system (Sammons et al., 2014) developed two different validation systems using entailment and majority voting. However, stacking (Sigletos et al., 2005; Wolpert, 1992) has not previously been employed for ensembling KBP-ESF systems. In stacking, a meta-classifier is learned from the output of multiple underlying systems. In our work, we translate this to the context of ensembling slot filling systems and build a stacked meta-classifier that learns to combine the results from individual slot filling systems. We detail our stacking approach for ensembling existing slot filling systems in Section 4. 3 Overview of KBP Slot Filling Task The goal of the TAC KBP-ESF task (Surdeanu, 2013; Surdeanu and Ji, 2014) is to collect information (fills) about specific attributes (slots) for a set of entities (queries) from a given corpus. The queries vary in each year of the task and can be either a person (PER) or an organization (ORG) entity. The slots are fixed and are listed in Table 1 by entity type. Some slots (like per:age) are single-valued while others (like per:children) are list-valued i.e., they can take multiple slot fillers. 
3.1 Input and Output The input for the task is a set of queries and the corpus in which to look for information. The queries are provided in an XML format containing basic information including an ID for the query, the name of the entity, and the type of entity (PER or ORG). The corpus consists of documents format from discussion forums, newswire and the Internet. Each document is identified by a unique document ID. The output for the task is a set of slot fills for each input query. Depending on the type, each query should have a NIL or one or more lines of output for each of the corresponding slots. The output line for each slot fill contains the fields shown in Table 2. The query ID in Column 1 should match the ID of the query given as input. The slot name (Column 2) is one of the slots listed in Table 1 based on entity type. Run ID (Column 3) is a unique identifier for each system. Column 4 contains a NIL filler if the system could not find any relevant slot filler. Otherwise, it contains the relation provenance. Provenance is of the form docid:startoffset-endoffset, where docid specifies a source document from the corpus and the offsets demarcate the text in this document supporting the relation. The offsets correspond to the spans of the candidate document that describe the relation between the query entity and the extracted slot filler. Column 5 contains the extracted slot filler. Column 6 is a filler provenance that is similar in format to relation provenance but in this case the offset corresponds to the portion of the document containing the extracted filler. Column 7 is a confi178 Person Organization per:alternate names per:cause of death org:country of headquarters org:founded by per:date of birth per:countries of residence org:stateorprovince of headquarters org:date dissolved per:age per:statesorprovinces of residence org:city of headquarters org:website per:parents per:cities of residence org:shareholders org:date founded per:spouse per:schools attended org:top members employees org:members per:city of birth per:city of death org:political religious affiliation org:member of per:origin per:stateorprovince of death org:number of employees members org:subsidiaries per:other family per:country of death org:alternate names org:parents per:title per:employee or member of per:religion per:stateorprovince of birth per:children per:country of birth per:siblings per:date of death per:charges Table 1: Slots for PER and ORG queries dence score which systems can provide to indicate their certainty in the extracted information. 3.2 Scoring The scoring for the ESF task is carried out as follows. The responses from all slot-filling systems are pooled and a key file is generated by having human assessors judge the correctness of these responses. In addition, LDC includes a manual key of fillers that were determined by human judges. Using the union of these keys as the gold standard, precision, recall, and F1 scores are computed. Column Field Description Column 1 Query ID Column 2 Slot name Column 3 Run ID Column 4 NIL or Relation Provenance Column 5 Slot filler Column 6 Filler Provenance Column 7 Confidence score Table 2: SF Output line fields 4 Ensembling Slot-Filling Systems Given a set of query entities and a fixed set of slots, the goal of ensembling is to effectively combine the output of different slot-filling systems. 
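As a concrete illustration of the Table 2 layout that the ensembling system consumes, a hypothetical reader for a single output line is sketched below. The tab delimiter, the example line, and the SlotFill field names are assumptions made for illustration; they are not taken from the official task specification.

```python
from typing import NamedTuple, Optional

class SlotFill(NamedTuple):
    query_id: str
    slot_name: str
    run_id: str
    relation_provenance: Optional[str]   # None when the system answers NIL
    filler: Optional[str]
    filler_provenance: Optional[str]
    confidence: Optional[float]

def parse_sf_line(line: str) -> SlotFill:
    cols = line.rstrip("\n").split("\t")
    if cols[3] == "NIL":
        return SlotFill(cols[0], cols[1], cols[2], None, None, None, None)
    return SlotFill(cols[0], cols[1], cols[2], cols[3], cols[4], cols[5], float(cols[6]))

# Invented example line following the Column 1-7 layout of Table 2.
example = "SF14_ENG_001\tper:age\trun1\tDOC_42:100-180\t52\tDOC_42:150-152\t0.87"
print(parse_sf_line(example))
```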
The input to the ensembling system is the output of individual systems (in the format described in previous section) containing slot fillers and additional information such as provenance and confidence scores. The output of the ensembling system is similar to the output of an individual system, but it productively aggregates the slot fillers from different systems. 4.1 Algorithm This section describes our ensembling approach which trains a final binary classifier using features that help judge the reliability and thus correctness of individual slot fills. In a final post-processing step, the slot fills that get classified as “correct” by the classifier are kept while the others are set to NIL. 4.1.1 Stacking Stacking is a popular ensembling method in machine learning (Wolpert, 1992) and has been successfully used in many applications including the top performing systems in the Netflix competition (Sill et al., 2009). The idea is to employ multiple learners and combine their predictions by training a “meta-classifier” to weight and combine multiple models using their confidence scores as features. By training on a set of supervised data that is disjoint from that used to train the individual models, it learns how to combine their results into an improved ensemble model. We employ a single classifier to train and test on all slot types using an L1-regularized SVM with a linear kernel (Fan et al., 2008). 4.1.2 Using Provenance As discussed above, each system provides provenance information for every non-NIL slot filler. There are two kinds of provenance provided: the relation provenance and the filler provenance. In our algorithm, we only use the filler provenance for a given slot fill. This is because of the changes in the output formats for the ESF task from 2013 to 2014. Specifically, the 2013 specification requires separate entity and justification provenance fields, but the 2014 collapses these into a single relation provenance field. An additional filler provenance 179 field is common to both specifications. Hence, we use the filler provenance that is common between 2013 and 2014 formats. As described earlier, every provenance has a docid and startoffsetendoffset that gives information about the document and offset in the document from where the slot fill has been extracted. The UI-CCG SFV system Sammons et al. (2014) effectively used this provenance information to help validate and filter slot fillers. This motivated us to use provenance in our stacking approach as additional features as input to the meta-classifier. We use provenance in two ways, first using the docid information, and second using the offset information. We use the docids to define a document-based provenance score in the following way: for a given query and slot, if N systems provide answers and a maximum of n of those systems give the same docid in their filler provenance, then the document provenance score for those n slot fills is n/N. Similarly, other slot fills are given lower scores based on the fraction of systems whose provenance document agree with theirs. Since this provenance score is weighted by the number of systems that refer to the same provenance, it measures the reliability of a slot fill based on the document from where it was extracted. Our second provenance measure uses offsets. The degree of overlap among the various systems’ offsets can also be a good indicator of the reliability of the slot fill. 
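Before turning to the offset-based measure described next, the document-level provenance score just defined can be sketched as follows; the data structures and system names are illustrative, not the authors' implementation.

```python
from collections import Counter

def document_provenance_scores(fills):
    # fills: list of (system_id, filler, docid) responses for one query/slot.
    # Each fill is scored by the fraction of responding systems whose filler
    # provenance cites the same document (n/N in the definition above).
    n_systems = len({system for system, _, _ in fills})
    doc_counts = Counter(docid for _, _, docid in fills)
    return {system: doc_counts[docid] / n_systems for system, _, docid in fills}

# Invented responses for a single query/slot.
fills = [("lsv", "52", "DOC_42"),
         ("stanford", "52", "DOC_42"),
         ("rpi_blender", "53", "DOC_97")]
print(document_provenance_scores(fills))   # lsv, stanford: 2/3; rpi_blender: 1/3
```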
The Jaccard similarity coefficient is a statistical measure of similarity between sets and is thus useful in measuring the degree of overlap among the offsets of systems. Slot fills have variable lengths and thus the provenance offset ranges are variable too. A metric such as the Jaccard coefficient captures the overlapping offsets along with normalizing based on the union and thus resolving the problem with variable offset ranges. For a given query and slot, if N systems that attempt to fill it have the same docid in their document provenance, then the offset provenance (OP) score for a slot fill by a system x is calculated as follows: OP(x) = 1 |N| × X i∈N,i̸=x |offsets(i) ∩offsets(x)| |offsets(i) ∪offsets(x)| Per our definition, systems that extract slot fills from different documents for the same query slot have zero overlap among offsets. We note that the offset provenance is always used along with the document provenance and thus useful in discriminating slot fills extracted from a different document for the same query slot. Like the document provenance score, the offset provenance score is also a weighted feature and is a measure of reliability of a slot fill based on the offsets in the document from where it is extracted. Unlike past SFV systems that use provenance for validation, our approach does not need access to the large corpus of documents from where the slot fills are extracted and is thus very computationally inexpensive. 4.2 Eliminating Slot-Filler Aliases When combining the output of different ESF systems, it is possible that some slot-filler entities might overlap with each other. An ESF system could extract a filler F1 for a slot S while another ESF system extracts another filler F2 for the same slot S. If the extracted fillers F1 and F2 are aliases (i.e. different names for the same entity), the scoring system for the TAC KBP SF task considers them redundant and penalizes the precision of the system. In order to eliminate aliases from the output of ensembled system, we employ a technique derived by inverting the scheme used by the LSV ESF system (Roth et al., 2013) for query expansion. LSV ESF uses a Wikipedia anchor-text model (Roth and Klakow, 2010) to generate aliases for given query entities. By including aliases for query names, the ESF system increase the number of candidate sentences fetched for the query. To eliminate filler aliases, we apply the same technique to generate aliases for all slot fillers of a given query and slot type. Given a slot filler, we obtain the Wikipedia page that is most likely linked to the filler text. Then, we obtain the anchor texts and their respective counts from all other Wikipedia pages that link to this page. Using these counts, we choose top N (we use N=10 as in LSV) and pick the corresponding anchor texts as aliases for the given slot filler. Using the generated aliases, we then verify if any of the slot fillers are redundant with respect to these aliases. This scheme is not applicable to slot types whose fillers are not entities (like date or age). Therefore, simpler matching schemes are used to eliminate redundancies for these slot types. 180 Common systems dataset All 2014 SFV systems dataset Figure 1: Precision-Recall curves for identifying the best voting performance on the two datasets 5 Experimental Evaluation This section describes a comprehensive set of experiments evaluating ensembling for the KBP ESF task. Our experiments are divided into two subsets based on the datasets they employ. 
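As a companion to the document-level score above, and before moving on to the experimental datasets, the offset provenance score OP(x) can be sketched as below. The sketch assumes offsets can be treated as sets of character positions and that the sum is normalized by the number of all systems answering the query/slot, with cross-document pairs contributing zero, which is one reading of the definition; names and spans are invented.

```python
def offset_provenance_scores(spans):
    # spans: {system_id: (docid, start_offset, end_offset)} for one query/slot.
    # OP(x) averages the Jaccard overlap of system x's filler-provenance offsets
    # with those of every other system; pairs drawn from different documents
    # contribute zero, and the sum is normalized by the number of systems.
    char_sets = {s: (doc, set(range(start, end + 1)))
                 for s, (doc, start, end) in spans.items()}
    n = len(char_sets)
    scores = {}
    for x, (doc_x, off_x) in char_sets.items():
        total = 0.0
        for i, (doc_i, off_i) in char_sets.items():
            if i == x or doc_i != doc_x:
                continue                               # different document: zero overlap
            total += len(off_i & off_x) / len(off_i | off_x)
        scores[x] = total / n
    return scores

# Invented filler provenance spans for one query/slot.
spans = {"lsv": ("DOC_42", 150, 160),
         "stanford": ("DOC_42", 148, 162),
         "rpi_blender": ("DOC_97", 10, 20)}
print(offset_provenance_scores(spans))
```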
Since our stacking approach relies on 2013 SFV data for training, we build a dataset of one run for every team that participated in both the 2013 and 2014 competitions and call it the common systems dataset. There are 10 common teams of the 17 teams that participated in ESF 2014. The other dataset comprises of all 2014 SFV systems (including all runs of all 17 teams that participated in 2014). There are 10 systems in the common systems dataset, while there are 65 systems in the all 2014 SFV dataset. Table 3 gives a list of the common systems for 2013 and 2014 ESF task. ESF systems do change from year to year and it’s not a perfect comparison, but systems generally get better every year and thus we are probably only underperforming. Common Systems LSV IIRG UMass IESL Stanford BUPT PRIS RPI BLENDER CMUML NYU Compreno UWashington Table 3: Common teams for 2013 and 2014 ESF 5.1 Methodology and Results For our unsupervised ensembling baselines, we evaluate on both the common systems dataset as well as the entire 2014 SFV dataset. We compare our stacking approach to three unsupervised baselines. The first is Union which takes the combination of values for all systems to maximize recall. If the slot type is list-valued, it classifies all slot fillers as correct and always includes them. If the slot type is single-valued, if only one systems attempts to answer it, then it includes that system’s slot fill. Otherwise if multiple systems produce a response, it only includes the slot fill with the highest confidence value as correct and discards the rest. The second baseline is Voting. For this approach, we vary the threshold on the number of systems that must agree on a slot fill from one to all. This gradually changes the system from the union to intersection of the slot fills, and we identify the threshold that results in the highest F1 score. We learn a threshold on the 2013 SFV dataset (containing 52 systems) that results in the best F1 score. We use this threshold for the voting baseline on 2014 SFV dataset. As we did for the 2013 common systems dataset, we learn a threshold on the 2013 common systems that results in the best F1 score and use this threshold for the voting baseline on 2014 common systems. The third baseline is an “oracle threshold” version of Voting. Since the best threshold for 2013 may not necessarily be the best threshold for 2014, we identify the best threshold for 2014 by plotting a Precision-Recall curve and finding the best F1 score for the voting baseline on both the SFV and common systems datasets. Figure 1 shows the 181 Figure 2: Our system pipeline for evaluating supervised ensembling approaches Baseline Precision Recall F1 Union 0.067 0.762 0.122 Voting (threshold learned on 2013 data) 0.641 0.288 0.397 Voting (optimal threshold for 2014 data) 0.547 0.376 0.445 Table 4: Performance of baselines on all 2014 SFV dataset (65 systems) Approach Precision Recall F1 Union 0.176 0.647 0.277 Voting (threshold learned on 2013 data) 0.694 0.256 0.374 Best ESF system in 2014 (Stanford) 0.585 0.298 0.395 Voting (optimal threshold for 2014 data) 0.507 0.383 0.436 Stacking 0.606 0.402 0.483 Stacking + Relation 0.607 0.406 0.486 Stacking + Provenance (document) 0.499 0.486 0.492 Stacking + Provenance (document) + Relation 0.653 0.400 0.496 Stacking + Provenance (document and offset) + Relation 0.541 0.466 0.501 Table 5: Performance on the common systems dataset (10 systems) for various configurations. All approaches except the Stanford system are our implementations. 
Precision-Recall curve for two datasets for finding the best possible F1 score using the voting baseline. We find that for the common systems dataset, a threshold of 3 (of 10) systems gives the best F1 score, while for the entire 2014 SFV dataset, a threshold of 10 (of 65) systems gives the highest F1. Note that this gives an upper bound on the best results that can be achieved with voting, assuming an optimal threshold is chosen. Since the upper bound can not be predicted without using the 2014 dataset, this baseline has an unfair advantage. Table 4 shows the performance of all 3 baselines on the all 2014 SFV systems dataset. For all our supervised ensembling approaches, we train on the 2013 SFV data and test on the 2014 data for the common systems. We have 5 different supervised approaches. Our first approach is stacking the common systems using their confidence scores to learn a classifier. As discussed earlier, in stacking we train a metaclassifier that combines the systems using their confidence scores as features. Since the common systems dataset has 10 systems, this classifier uses 10 features. The second approach also provides stacking with a nominal feature giving the relation name (as listed in Table 1) for the given slot instance. This allows the system to learn different evidence-combining functions for different slot types if the classifier finds this useful. For our third approach, we also provide the document provenance feature described in Section 4.1. Altogether this approach has 11 features (10 confidence score + 1 document provenance score). The fourth approach uses confidences, the document provenance feature, and a one-hot encoding of the relation name for the slot instance. Our final approach also includes the offset provenance (OP) feature discussed in Section 4.1. There are altogether 13 features in this approach. All our supervised approaches use the Weka package (Hall et al., 2009) for training the meta-classifier, using an L1-regularized SVM with a linear kernel (other classifiers gave similar results). Figure 2 shows our system pipeline for evaluating supervised ensembling approaches. Table 5 gives the performance of all our supervised approaches as well as 182 our unsupervised baselines for the common systems dataset. Analysis by Surdeanu and Ji (2014) suggests that 2014 ESF queries are more difficult than those for 2013. They compare two systems by running both on 2013 and 2014 data and find there is a considerable drop in the performance of both the systems. We note that they run the same exact system on 2013 and 2014 data. Thus, in order to have a better understanding of our results, we plot a learning curve by training on different sizes of the 2013 SFV data and using the scorer to measure the F1 score on the 2014 SFV data for the 10 common systems. Figure 3 shows the learning curve thus obtained. Although there are certain parts of the dataset when the F1 score drops which we suspect is due to overfitting the 2013 data, there is still a strong correlation between the 2013 training data size and F1 score on the 2014 dataset. Thus we can infer that training on 2013 data is useful even though the 2013 and 2014 data are fairly different. Although the queries change, the common systems remain more-or-less the same and stacking enables a meta-classifier to weigh those common systems based on their 2013 performance. 
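The feature construction used by the supervised ensembles is mechanical enough to sketch end to end. The snippet below assembles, for each slot-fill instance, the ten per-system confidence scores, the two provenance scores and a relation indicator, and trains an L1-regularized linear SVM as the meta-classifier. It is a simplified stand-in for the Weka-based setup described above (scikit-learn is used here purely for illustration), and the instance format, toy data and names are our assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

SYSTEMS = [f"sys{i}" for i in range(10)]      # the 10 common systems
RELATIONS = ["per:title", "org:founded_by"]    # toy subset of slot types

def featurize(inst):
    """One slot-fill instance -> confidences + provenance scores + relation one-hot."""
    confidences = [inst["conf"].get(s, 0.0) for s in SYSTEMS]
    provenance = [inst["doc_prov"], inst["offset_prov"]]
    relation = [1.0 if inst["relation"] == r else 0.0 for r in RELATIONS]
    return confidences + provenance + relation

# Toy instances standing in for the 2013 SFV training data (label 1 = correct fill).
train = [
    {"conf": {"sys0": 0.9, "sys3": 0.8}, "doc_prov": 0.6, "offset_prov": 0.5,
     "relation": "per:title", "label": 1},
    {"conf": {"sys7": 0.4}, "doc_prov": 0.1, "offset_prov": 0.0,
     "relation": "org:founded_by", "label": 0},
]
X = np.array([featurize(i) for i in train])
y = np.array([i["label"] for i in train])

# L1-regularized linear SVM as the stacking meta-classifier.
meta = LinearSVC(penalty="l1", dual=False, C=1.0).fit(X, y)

# At test time, fills classified as 0 are set to NIL in post-processing.
print(meta.predict(X))
```

Restricting the feature vector to the confidences only, then adding the document provenance, the relation indicator and finally the offset provenance mirrors the progression of configurations evaluated in Table 5.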
Figure 3: Learning curve for training on 2013 and testing on 2014 common systems dataset To further validate our approach, we divide the 2013 SFV data based on the systems that extracted those slot fills. Then we sort the systems, from higher to lower, based on the number of false positives produced by them in the ensembling approach. Next we train a classifier in an incremental fashion adding one system’s slot fills for training at each step and analyzing the performance on 2014 data. This allows us to analyze the results at the system level. Figure 4 shows the plot of F1 score vs. the number of systems at each step. The figure shows huge improvement in F1 score at steps 6 and 7. At step 6 the Stanford system is added to the pool of systems which is the best performing ESF system in 2014 and fourth best in 2013. At step 7, the UMass system is added to the pool and, although the system on it own is weak, it boosts the performance of our ensembling approach. This is because the UMass system alone contributes approximately 24% of the 2013 training data (Singh et al., 2013). Thus adding this one system significantly improves the training step leading to better performance. We also notice that our system becomes less conservative at this step and has higher recall. The reason for this is that the systems from 1 to 5 had very high precision and low recall whereas from system 6 onwards the systems have high recall. Thus adding the UMass system enables our meta-classifier to have a higher recall for small decrease in precision and thus boosting the overall F1 measure. Without it, the classifier produces high precision but low recall and decreases the overall F1 score by approximately 6 points. Figure 4: Incrementally training on 2013 by adding a system at each step and testing on 2014 common systems dataset We also experimented with cross validation within the 2014 dataset. Since we used only 2014 data for this experiment, we also included the relation provenance as discussed in Section 4.1.2. Table 6 shows the results on 10-fold cross-validation on 2014 data with only the filler provenance and with both the filler and relation provenance. The performance of using only the filler provenance is slightly worse than training on 2013 because the 2014 SFV data has many fewer instances but uses more systems for learning compared to the 2013 183 Approach Precision Recall F1 Stacking + Filler provenance + Relation 0.606 0.415 0.493 Stacking + Filler and Relation provenance + Relation 0.609 0.434 0.506 Table 6: 10-fold Cross-Validation on 2014 SFV dataset (65 systems) Baseline Precision Recall F1 Union 0.054 0.877 0.101 Voting (threshold learned on 2013 data) 0.637 0.406 0.496 Voting (optimal threshold for 2014 data) 0.539 0.526 0.533 Table 7: Baseline performance on all 2014 SFV dataset (65 systems) using unofficial scorer Approach Precision Recall F1 Union 0.177 0.922 0.296 Voting (threshold learned on 2013 data) 0.694 0.256 0.374 Best published SFV result in 2014 (UIUC) 0.457 0.507 0.481 Voting (optimal threshold for 2014 data) 0.507 0.543 0.525 Stacking + Provenance(document) 0.498 0.688 0.578 Stacking 0.613 0.562 0.586 Stacking + Relation 0.613 0.567 0.589 Stacking + Provenance (document and offset) + Relation 0.541 0.661 0.595 Stacking + Provenance (document) + Relation 0.659 0.56 0.606 Table 8: Performance on the common systems dataset (10 systems) for various configurations using the unofficial scorer. All approaches except the UIUC system are our implementations. SFV data. 
The TAC KBP official scoring key for the ESF task includes human annotated slot fills along with the pooled slot fills obtained by all participating systems. However, Sammons et al. (2014) use an unofficial scoring key in their paper that does not include human annotated slot fills. In order to compare to their results, we also present results using the same unofficial key. Table 7 gives the performance of our baseline systems on the 2014 SFV dataset using the unofficial key for scoring. We note that our Union does not produce a recall of 1.0 on the unofficial scorer due to our singlevalued slot selection strategy for multiple systems. As discussed earlier for the single-valued slot, we include the slot fill with highest confidence (which may not necessarily be correct) and thus may not match the unofficial scorer. Table 8 gives the performance of all our supervised approaches along with the baselines on the common systems dataset using the unofficial key for scoring. UIUC is one of the two teams participating in the SFV 2014 task and the only team to report results, but they report 6 different system configurations and we show their best performance. 5.2 Discussion Our results indicate that stacking with provenance information and relation type gives the best performance using both the official ESF scorer as well as the unofficial scorer that excludes the humangenerated slot fills. Our stacking approach that uses the 10 systems common between 2013 and 2014 also outperforms the ensembling baselines that have the advantage of using all 65 of the 2014 systems. Our stacking approach would presumably perform even better if we had access to 2013 training data for all 2014 systems. Of course, the best-performing ESF system for 2014 did not have access to the pooled slot fills of all participating systems. Although pooling the results has an advantage, naive pooling methods such as the ensembling baselines, in particular the voting approach, do not perform as well as our stacked ensembles. Our best approach outperforms the best baseline for both the datasets by at least 6 F1 points using both the official and unof184 ficial scorer. As expected the Union baseline has the highest recall. Among the supervised approaches, stacking with document provenance produces the highest precision and is significantly higher (approximately 5%) than the approach that produces the second highest precision. As discussed earlier, we also scored our approaches on the unofficial scorer so that we can compare our results to the UIUC system that was the best performer in the 2014 SFV task. Our best approach beats their best system configuration by a F1 score of 12 points. Our stacking approach also outperforms them on precision and recall by a large margin. 6 Related Work Our system is part of a body of work on increasing the performance of relation extraction through ensemble methods. The use of stacked generalization for information extraction has been demonstrated to outperform both majority voting and weighted voting methods (Sigletos et al., 2005). In relation extraction, a stacked classifier effectively combines a supervised, closed-domain Conditional Random Field-based relation extractor with an opendomain CRF Open IE system, yielding a 10% increase in precision without harming recall (Banko et al., 2008). To our knowledge, we are the first to apply stacking to KBP and the first to use provenance as a feature in a stacking approach. 
Many KBP SFV systems cast validation as a single-document problem and apply a variety of techniques, such as rule-based consistency checks (Angeli et al., 2013), and techniques from the well-known Recognizing Textual Entailment (RTE) task (Cheng et al., 2013; Sammons et al., 2014). In contrast, the 2013 JHUAPL system aggregates the results of many different extractors using a constraint optimization framework, exploiting confidence values reported by each input system (Wang et al., 2013). A second approach in the UI CCG system (Sammons et al., 2014) aggregates results of multiple systems by using majority voting. In the database, web-search, and data-mining communities, a line of research into “truthfinding” or “truth-discovery” methods addresses the related problem of combining evidence for facts from multiple sources, each with a latent credibility (Yin et al., 2008). The RPI BLENDER KBP system (Yu et al., 2014) casts SFV in this framework, using a graph propagation method that modeled the credibility of systems, sources, and response values. However they only report scores on the 2013 SFV data which contain less complicated and easier queries compared to the 2014 data. Therefore, we cannot directly compare our system’s performance to theirs. Google’s Knowledge Vault system (Dong et al., 2014) combines the output of four diverse extraction methods by building a boosted decision stump classifier (Reyzin and Schapire, 2006). For each proposed fact, the classifier considers both the confidence value of each extractor and the number of responsive documents found by the extractor. A separate classifier is trained for each predicate, and Platt Scaling (Platt, 1999) is used to calibrate confidence scores. 7 Conclusion This paper has presented experimental results showing that stacking is a very promising approach to ensembling KBP systems. From our literature survey, we observe that we are the first to employ stacking and combine it with provenance information to ensemble KBP systems. Our stacked meta-classifier provides an F1 score of 50.1% on 2014 KBP ESF, outperforming the best ESF and SFV systems from the 2014 competition, and thereby achieving a new state-of-the-art for this task. We found that provenance features increased accuracy, highlighting the importance of provenance information (even without accessing the source corpus) in addition to confidence scores for ensembling information extraction systems. 8 Acknowledgements We thank the anonymous reviewers for their valuable feedback. This research was supported by the DARPA DEFT program under AFRL grant FA8750-13-2-0026. References Gabor Angeli, Arun Chaganty, Angel Chang, Kevin Reschke, Julie Tibshirani, Jean Y Wu, Osbert Bastani, Keith Siilats, and Christopher D Manning. 2013. Stanford’s 2013 KBP system. In Proceedings of the Sixth Text Analysis Conference (TAC2013). Michele Banko, Oren Etzioni, and Turing Center. 2008. The tradeoffs between open and traditional 185 relation extraction. In ACL08, volume 8, pages 28– 36. Xiao Cheng, Bingling Chen, Rajhans Samdani, KaiWei Chang, Zhiye Fei, Mark Sammons, John Wieting, Subhro Roy, Chizheng Wang, and Dan Roth. 2013. Illinois cognitive computation group UI-CCG TAC 2013 entity linking and slot filler validation systems. In Proceedings of the Sixth Text Analysis Conference (TAC2013). T. Dietterich. 2000. Ensemble methods in machine learning. In J. Kittler and F. Roli, editors, First International Workshop on Multiple Classifier Systems, Lecture Notes in Computer Science, pages 1– 15. 
Springer-Verlag. Pedro Domingos, Stanley Kok, Daniel Lowd, Hoifung Poon, Matthew Richardson, and Parag Singla. 2008. Markov logic. In Luc De Raedt, Paolo Frasconi, Kristian Kersting, and Stephen Muggleton, editors, Probabilistic Inductive Logic Programming, volume 4911 of Lecture Notes in Computer Science, pages 92–117. Springer. Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 601–610. ACM. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research, 9:1871–1874. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H Witten. 2009. The WEKA data mining software: an update. ACM SIGKDD explorations newsletter, 11(1):10– 18. John C. Henderson and Eric Brill. 1999. Exploiting diversity in natural language processing: Combining parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-99), pages 187–194, College Park, MD. Yu Hong, Xiaobin Wang, Yadong Chen, Jian Wang, Tongtao Zhang, Jin Zheng, Dian Yu, Qi Li, Boliang Zhang, Han Wang, et al. 2014. RPI BLENDER TAC-KBP2014 knowledge base population system. Proceedings of the Seventh Text Analysis Conference (TAC2014). Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Ted Pedersen. 2000. A simple approach to building ensembles of naive Bayesian classifiers for word sense disambiguation. In Proceedings of the Meeting of the North American Association for Computational Linguistics, pages 63–69. John C. Platt. 1999. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Peter J. Bartlett, Bernhard Sch¨olkopf, Dale Schuurmans, and Alex J. Smola, editors, Advances in Large Margin Classifiers, pages 61–74. MIT Press, Boston. Lev Reyzin and Robert E Schapire. 2006. How boosting the margin can also boost classifier complexity. In Proceedings of the 23rd International Conference on Machine Learning, pages 753–760. ACM. Benjamin Roth and Dietrich Klakow. 2010. Crosslanguage retrieval using link-based language models. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 773–774. ACM. Benjamin Roth, Tassilo Barth, Michael Wiegand, et al. 2013. Effective slot filling based on shallow distant supervision methods. Proceedings of the Seventh Text Analysis Conference (TAC2013). Mark Sammons, Yangqiu Song, Ruichen Wang, Gourab Kundu, et al. 2014. Overview of UI-CCG systems for event argument extraction, entity discovery and linking, and slot filler validation. Proceedings of the Seventh Text Analysis Conference (TAC2014). Georgios Sigletos, Georgios Paliouras, Constantine D Spyropoulos, and Michalis Hatzopoulos. 2005. Combining information extraction systems using voting and stacked generalization. The Journal of Machine Learning Research, 6:1751–1782. 
Joseph Sill, G´abor Tak´acs, Lester Mackey, and David Lin. 2009. Feature-weighted linear stacking. arXiv preprint arXiv:0911.0460. Sameer Singh, Limin Yao, David Belanger, Ariel Kobren, Sam Anzaroot, Michael Wick, Alexandre Passos, Harshal Pandya, Jinho Choi, Brian Martin, and Andrew McCallum. 2013. Universal schema for slot filling and cold start: UMass IESL. Mihai Surdeanu and Heng Ji. 2014. Overview of the English slot filling track at the TAC2014 Knowledge Base Population Evaluation. In Proceedings of the Seventh Text Analysis Conference (TAC2014). Mihai Surdeanu. 2013. Overview of the TAC2013 knowledge base population evaluation: English slot filling and temporal slot filling. In Proceedings of the Sixth Text Analysis Conference (TAC 2013). I-Jeng Wang, Edwina Liu, Cash Costello, and Christine Piatko. 2013. JHUAPL TAC-KBP2013 slot filler validation system. In Proceedings of the Sixth Text Analysis Conference (TAC2013). 186 Anurag Wazalwar, Tushar Khot, Ce Zhang, Chris Re, Jude Shavlik, and Sriraam Natarajan. 2014. TAC KBP 2014 : English slot filling track DeepDive with expert advice. In Proceedings of the Seventh Text Analysis Conference (TAC2014). Matthew Whitehead and Larry Yaeger. 2010. Sentiment mining using ensemble classification models. In Tarek Sobh, editor, Innovations and Advances in Computer Sciences and Engineering. Springer Verlag, Berlin. David H. Wolpert. 1992. Stacked generalization. Neural Networks, 5:241–259. Xiaoxin Yin, Jiawei Han, and Philip S Yu. 2008. Truth discovery with multiple conflicting information providers on the web. Knowledge and Data Engineering, IEEE Transactions on, 20(6):796–808. Dian Yu, Hongzhao Huang, Taylor Cassidy, Heng Ji, Chi Wang, Shi Zhi, Jiawei Han, Clare Voss, and Malik Magdon-Ismail. 2014. The wisdom of minority: Unsupervised slot filling validation based on multidimensional truth-finding. In Proc. The 25th International Conference on Computational Linguistics (COLING2014). 187
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 188–197, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Generative Event Schema Induction with Entity Disambiguation Kiem-Hieu Nguyen1, 2 Xavier Tannier3, 1 Olivier Ferret2 Romaric Besanc¸on2 (1) LIMSI-CNRS (2) CEA, LIST, Laboratoire Vision et Ingnierie des Contenus, F-91191, Gif-sur-Yvette (3) Univ. Paris-Sud {nguyen,xtannier}@limsi.fr, {olivier.ferret,romaric.besancon}@cea.fr Abstract This paper presents a generative model to event schema induction. Previous methods in the literature only use head words to represent entities. However, elements other than head words contain useful information. For instance, an armed man is more discriminative than man. Our model takes into account this information and precisely represents it using probabilistic topic distributions. We illustrate that such information plays an important role in parameter estimation. Mostly, it makes topic distributions more coherent and more discriminative. Experimental results on benchmark dataset empirically confirm this enhancement. 1 Introduction Information Extraction was initially defined (and is still defined) by the MUC evaluations (Grishman and Sundheim, 1996) and more specifically by the task of template filling. The objective of this task is to assign event roles to individual textual mentions. A template defines a specific type of events (e.g. earthquakes), associated with semantic roles (or slots) hold by entities (for earthquakes, their location, date, magnitude and the damages they caused (Jean-Louis et al., 2011)). Schema induction is the task of learning these templates with no supervision from unlabeled text. We focus here on event schema induction and continue the trend of generative models proposed earlier for this task. The idea is to group together entities corresponding to the same role in an event template based on the similarity of the relations that these entities hold with predicates. For example, in a corpus about terrorist attacks, entities that are objects of verbs to kill, to attack can be grouped together and characterized by a role named VICTIM. The output of this identification operation is a set of clusters of which members are both words and relations, associated with their probability (see an example later in Figure 4). These clusters are not labeled but each of them represents an event slot. Our approach here is to improve this initial idea by entity disambiguation. Some ambiguous entities, such as man or soldier, can match two different slots (victim or perpetrator). An entity such as terrorist can be mixed up with victims when articles relate that a terrorist has been killed by police (and thus is object of to kill). Our hypothesis is that the immediate context of entities is helpful for disambiguating them. For example, the fact that man is associated with armed, dangerous, heroic or innocent can lead to a better attribution and definition of roles. We then introduce relations between entities and their attributes in the model by means of syntactic relations. The document level, which is generally a center notion in topic modeling, is not used in our generative model. This results in a simpler, more intuitive model, where observations are generated from slots, that are defined by probabilistic distributions on entities, predicates and syntactic attributes. 
This model offers room for further extensions since multiple observations on an entity can be represented in the same manner. Model parameters are estimated by Gibbs sampling. We evaluate the performance of this approach by an automatic and empiric mapping between slots from the system and slots from the reference in a way similar to previous work in the domain. The rest of this paper is organized as follows: Section 2 briefly presents previous work; in Section 3, we detail our entity and relation representation; we describe our generative model in Section 4, before presenting our experiments and evaluations in Section 5. 188 2 Related Work Despite efforts made for making template filling as generic as possible, it still depends heavily on the type of events. Mixing generic processes with a restrictive number of domainspecific rules (Freedman et al., 2011) or examples (Grishman and He, 2014) is a way to reduce the amount of effort needed for adapting a system to another domain. The approaches of Ondemand information extraction (Hasegawa et al., 2004; Sekine, 2006) and Preemptive Information Extraction (Shinyama and Sekine, 2006) tried to overcome this difficulty in another way by exploiting templates induced from representative documents selected by queries. Event schema induction takes root in work on the acquisition from text of knowledge structures, such as the Memory Organization Packets (Schank, 1980), used by early text understanding systems (DeJong, 1982) and more recently by Ferret and Grau (1997). First attempts for applying such processes to schema induction have been made in the fields of Information Extraction (Collier, 1998), Automatic Summarization (Harabagiu, 2004) and event QuestionAnswering (Filatova et al., 2006; Filatova, 2008). More recently, work after (Hasegawa et al., 2004) has developed weakly supervised forms of Information Extraction including schema induction in their objectives. However, they have been mainly applied to binary relation extraction in practice (Eichler et al., 2008; Rosenfeld and Feldman, 2007; Min et al., 2012). In parallel, several approaches were proposed for performing specifically schema induction in already existing frameworks: clause graph clustering (Qiu et al., 2008), event sequence alignment (Regneri et al., 2010) or LDA-based approach relying on FrameNet-like semantic frames (Bejan, 2008). More event-specific generative models were proposed by Chambers (2013) and Cheung et al. (2013). Finally, Chambers and Jurafsky (2008), Chambers and Jurafsky (2009), Chambers and Jurafsky (2011), improved by Balasubramanian et al. (2013), and Chambers (2013) focused specifically on the induction of event roles and the identification of chains of events for building representations from texts by exploiting coreference resolution or the temporal ordering of events. All this work is also linked to work about the induction of scripts from texts, more or less closely linked to Attributes Head Triggers #1 [armed:amod] man [attack:nsubj, kill:nsubj] #2 [police:nn] station [attack:dobj] #3 [] policeman [kill:dobj] #4 [innocent:amod, man [wound:dobj] young:amod] Figure 1: Entity representation as tuples of ([attributes], head, [triggers]). events, such as (Frermann et al., 2014), (Pichotta and Mooney, 2014) or (Modi and Titov, 2014). The work we present in this article is in line with Chambers (2013), which will be described in more details in Section 5, together with a quantitative and qualitative comparison. 
3 Entity Representation An entity is represented as a triple containing: a head word h, a list A of attribute relations and a list T of trigger relations. Consider the following example: (1) Two armed men attacked the police station and killed a policeman. An innocent young man was also wounded. As illustrated in Figure 1, four entities, equivalent to four separated triples, are generated from the text above. Head words are extracted from noun phrases. A trigger relation is composed of a predicate (attack, kill, wound) and a dependency type (subject, object). An attribute relation is composed of an argument (armed, police, young) and a dependency type (adjectival, nominal or verbal modifier). In the relationship to triggers, a head word is argument, but in the relationship to attributes, it is predicate. We use Stanford NLP toolkit (Manning et al., 2014) for parsing and coreference resolution. A head word is extracted if it is a nominal or proper noun and it is related to at least one trigger; pronouns are omitted. A trigger of an head word is extracted if it is a verb or an eventive noun and the head word serves as its subject, object, or preposition. We use the categories noun.EVENT and noun.ACT in WordNet as a list of eventive nouns. A head word can have more than one trigger. These multiple relations can come from a syntactic coordination inside a single sentence, as it is the case in the first sentence of the illustrating example. They can also represent a coreference 189 h t π φ uni(1,K) #tuples a θ s dir(α) dir(β) dir(γ) A T Figure 2: Generative model for event induction. chain across sentences, as we use coreference resolution to merge the triggers of mentions corefering to the same entity in a document. Coreferences are useful sources for event induction (Chambers and Jurafsky, 2011; Chambers, 2013). Finally, an attribute is extracted if it is an adjective, a noun or a verb and serves as an adjective, verbal or nominal modifier of a head word. If there are several modifiers, only the closest to the head word is selected. This “best selection” heuristic allows to omit non-discriminative attributes for the entity. 4 Generative Model 4.1 Model Description Figure 2 shows the plate notation of our model. For each triple representing an entity e, the model first assigns a slot s for the entity from an uniform distribution uni(1, K). Its head word h is then generated from a multinominal distribution πs. Each ti of event trigger relations Te is generated from a multinominal distribution φs. Each aj of attribute relations Ae is similarly generated from a multinominal distribution θs. The distributions θ, π, and φ are generated from Dirichlet priors dir(α), dir(β) and dir(γ) respectively. 
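To connect the entity representation of Section 3 with the model just described, the sketch below encodes triple #1 of Figure 1 and shows how one such observation would be drawn under the generative process. It is an illustrative Python sketch with made-up toy distributions; the data structure, function names and numbers are ours, not the authors' implementation.

```python
import random
from dataclasses import dataclass, field

@dataclass
class EntityTriple:
    head: str
    attributes: list = field(default_factory=list)  # e.g. ["armed:amod"]
    triggers: list = field(default_factory=list)    # e.g. ["attack:nsubj"]

# Triple #1 of Figure 1, extracted from sentence (1).
e1 = EntityTriple(head="man",
                  attributes=["armed:amod"],
                  triggers=["attack:nsubj", "kill:nsubj"])

def sample(dist):
    """Draw one outcome from a {value: probability} multinomial."""
    values, probs = zip(*dist.items())
    return random.choices(values, weights=probs, k=1)[0]

def generate_entity(K, pi, phi, theta, n_triggers, n_attrs):
    """Generate one (attributes, head, triggers) observation:
    slot index s ~ uniform over the K slots; head ~ pi[s];
    each trigger ~ phi[s]; each attribute ~ theta[s]."""
    s = random.randrange(K)
    head = sample(pi[s])
    triggers = [sample(phi[s]) for _ in range(n_triggers)]
    attributes = [sample(theta[s]) for _ in range(n_attrs)]
    return s, EntityTriple(head, attributes, triggers)

# Tiny toy parameters for K = 2 slots (purely illustrative numbers).
pi = [{"man": 0.7, "station": 0.3}, {"bomb": 0.6, "policeman": 0.4}]
phi = [{"attack:nsubj": 0.5, "kill:nsubj": 0.5}, {"explode:nsubj": 1.0}]
theta = [{"armed:amod": 1.0}, {"car:nn": 1.0}]
print(generate_entity(K=2, pi=pi, phi=phi, theta=theta, n_triggers=2, n_attrs=1))
```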
Given a set of entities E, our model (π, φ, θ) is defined by

P_{\pi,\phi,\theta}(E) = \prod_{e \in E} P_{\pi,\phi,\theta}(e)    (2)

where the probability of each entity e is defined by

P_{\pi,\phi,\theta}(e) = P(s) \times P(h|s) \times \prod_{t \in T_e} P(t|s) \times \prod_{a \in A_e} P(a|s)    (3)

The generative story is as follows:

for slot s ← 1 to K do
    Generate an attribute distribution θs from a Dirichlet prior dir(α);
    Generate a head distribution πs from a Dirichlet prior dir(β);
    Generate a trigger distribution φs from a Dirichlet prior dir(γ);
end
for entity e ∈ E do
    Generate a slot s from a uniform distribution uni(1, K);
    Generate a head h from a multinominal distribution πs;
    for i ← 1 to |Te| do
        Generate a trigger ti from a multinominal distribution φs;
    end
    for j ← 1 to |Ae| do
        Generate an attribute aj from a multinominal distribution θs;
    end
end

4.2 Parameter Estimation

For parameter estimation, we use the Gibbs sampling method (Griffiths, 2002). The slot variable s is sampled by integrating out all the other variables. Previous models (Cheung et al., 2013; Chambers, 2013) are based on document-level topic modeling, which originated from models such as Latent Dirichlet Allocation (Blei et al., 2003). Our model is, instead, independent from document contexts. Its input is a sequence of entity triples. Document boundaries are only used in a post-processing step of filtering (see Section 5.3 for more details). There is a single universal slot distribution instead of one slot distribution per document. Furthermore, the slot prior is ignored by using a uniform distribution as a particular case of a categorical distribution.

Sampling-based slot assignment can depend on initial states and random seeds. In our implementation of Gibbs sampling, we use 2,000 burn-in iterations out of 10,000 iterations overall. The purpose of burn-in is to ensure that the parameters converge to a stable state before estimating the probability distributions. Moreover, an interval of 100 iterations is applied between consecutive samples in order to avoid overly strong correlation between samples. In particular, for tracking the changes in probabilities resulting from attribute relations, we first ran a specific burn-in with only heads and trigger relations. This stable state was then used as initialization for the second burn-in, in which attributes, heads, and triggers were used altogether.

Figure 3: Probability convergence when using attributes in sampling, for (a) P(terrorist|ATTACK victim), (b) P(terrorist|ATTACK perpetrator), (c) P(kill:dobj|ATTACK victim) and (d) P(kill:dobj|ATTACK perpetrator). The use of attributes is started at point 50 (i.e., 50% of the burn-in phase). The dotted line shows convergence without attributes; the continuous line shows convergence with attributes.

This specific experimental setting made us understand how the attributes modified the distributions. We observed that non-ambiguous words or relations (i.e.
explode, murder:nsubj) were only slightly modified whereas probabilities of ambiguous words such as man, soldier or triggers such as kill:dobj or attack:nsubj converged smoothly to a different stable state that was semantically more coherent. For instance, the model interestingly realized that even if a terrorist was killed (e.g. by police), he was not actually a real victim of an attack. Figure 3 shows probability convergences of terrorist and kill:dobj given ATTACK victim and ATTACK perpetrator. 5 Evaluations In order to compare with related work, we evaluated our method on the Message Understanding Conference (MUC-4) corpus (Sundheim, 1991) using precision, recall and F-score as conventional metrics for template extraction. In what follows, we first introduce the MUC4 corpus (Section 5.1.1), we detail the mapping technique between learned slots and reference slots (5.1.2) as well as the hyper-parameters of our model (5.1.3). Next, we present a first experiment (Section 5.2) showing how using attribute relations improves overall results. The second experiment (Section 5.3) studies the impact of document classification. We then compare our results with previous approaches, more particularly with Chambers (2013), from both quantitative and qualitative points of view (Section 5.4). Finally, Section 5.5 is dedicated to error analysis, with a special emphasis on sources of false positives. 5.1 Experimental Setups 5.1.1 Datasets The MUC-4 corpus contains 1,700 news articles about terrorist incidents happening in Latin America. The corpus is divided into 1,300 documents 191 for the development set and four test sets, each containing 100 documents. We follow the rules in the literature to guarantee comparable results (Patwardhan and Riloff, 2007; Chambers and Jurafsky, 2011). The evaluation focuses on four template types – ARSON, ATTACK, BOMBING, KIDNAPPING – and four slots – Perpetrator, Instrument, Target, and Victim. Perpetrator is merged from Perpetrator Individual and Perpetrator Organization. The matching between system answers and references is based on head word matching. A head word is defined as the rightmost word of the phrase or as the right-most word of the first ‘of’ if the phrase contains any. Optional templates and slots are ignored when calculating recall. Template types are ignored in evaluation: this means that a perpetrator of BOMBING in the answers could be compared to a perpetrator of ARSON, ATTACK, BOMBING or KIDNAPPING in the reference. 5.1.2 Slot Mapping The model learns K slots and assigns each entity in a document to one of the learned slots. Slot mapping consists in matching each reference slot to an equivalent learned slot. Note that among the K learned slots, some are irrelevant while others, sometimes of high quality, contain entities that are not part of the reference (spatio-temporal information, protagonist context, etc.). For this reason, it makes sense to have much more learned slots than expected event slots. Similarly to previous work in the literature, we implemented an automatic empirical-driven slot mapping. Each reference slot was mapped to the learned slot that performed the best on the task of template extraction according to the Fscore metric. Here, two identical slots of two different templates, such as ATTACK victim and KIDNAPPING victim, must to be mapped separately. Figure 4 shows the most common words of two learned slots which were mapped to BOMBING instrument and KIDNAPPING victim. This mapping is then kept for testing. 
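The slot-mapping procedure is easy to state operationally: for every reference (template, slot) pair, score each of the K learned slots on the mapping documents and keep the one with the best F-score. The sketch below is an illustrative Python rendering of that loop; evaluate_f1 is our simplified stand-in for the head-word-matching scorer of Section 5.1.1, and all names and the toy data are ours.

```python
def evaluate_f1(predicted, gold):
    """Simple head-word matching P/R/F1 (a stand-in for the official scorer)."""
    predicted, gold = set(predicted), set(gold)
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    p, r = tp / len(predicted), tp / len(gold)
    return 2 * p * r / (p + r) if p + r else 0.0

def map_slots(reference_slots, learned_slots, extractions, gold):
    """Map each reference (template, slot) pair to its best learned slot.

    extractions[k] holds the head words assigned to learned slot k on the
    mapping documents; gold[(template, slot)] holds the reference answers.
    Identical slots of different templates are mapped separately.
    """
    mapping = {}
    for ref in reference_slots:                    # e.g. ("KIDNAPPING", "Victim")
        best_k, best_f1 = None, -1.0
        for k in learned_slots:                    # k ranges over the K learned slots
            f1 = evaluate_f1(extractions[k], gold[ref])
            if f1 > best_f1:
                best_k, best_f1 = k, f1
        mapping[ref] = best_k                      # kept fixed for testing
    return mapping

# Toy example with two learned slots and one reference slot.
extractions = {1: ["bomb", "dynamite"], 2: ["man", "leader"]}
gold = {("BOMBING", "Instrument"): ["bomb", "charge"]}
print(map_slots(gold.keys(), [1, 2], extractions, gold))
# -> {('BOMBING', 'Instrument'): 1}
```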
5.1.3 Parameter Tuning We first tuned hyper-parameters of the models on the development set. The number of slots was set to K = 35. Dirichlet priors were set to α = 0.1, β = 1 and γ = 0.1. The model was learned from the whole dataset. Slot mapping was done on tst1 and tst2. Outputs from tst3 and tst4 were evalBOMBING instrument Attributes Heads Triggers car:nn bomb explode:nsubj powerful:amod fire hear:dobj explosive:amod explosion place:dobj dynamite:nn blow cause:nsubj heavy:amod charge set:dobj KIDNAPPING victim Attributes Heads Triggers several:amod people arrest:dobj other:amod person kidnap:dobj responsible:amod man release:dobj military:amod member kill:dobj young:amod leader identify:prep as Figure 4: Attribute, head and trigger distributions learned by the model HT+A for learned slots that were mapped to BOMBING instrument and KIDNAPPING victim. uated using references and were averaged across ten runs. 5.2 Experiment 1: Using Entity Attributes In this experiment, two versions of our model are compared: HT+A uses entity heads, event trigger relations and entity attribute relations. HT uses only entity heads and event triggers and omits attributes. We studied the gain brought by attribute relations with a focus on their effect when coreference information was available or was missing. The variations on the model input are named single, multi and coref. Single input has only one event trigger for each entity. A text like an armed man attacked the police station and killed a policeman results in two triples for the entity man: (armed:amod, man, attack:nsubj) and (armed:amod, man, kill:nsubj). In multi input, one entity can have several event triggers, leading for the text above to the triple (armed:amod, man, [attack:nsubj, kill:nsubj]). The coref input is richer than multi in that, in addition to triggers from the same sentence, triggers linked to the same corefered entity are merged together. For instance, if man in the above example corefers with he in He was arrested three hours later, the merged triple becomes (armed:amod, man, [attack:nsubj, kill:nsubj, arrest:dobj]). The plate notations of these model+data combinations are given in Figure 5. Table 1 shows a consistent improvement when using attributes, both with and without coreferences. The best performance of 40.62 F-score is obtained by the full model on inputs with coref192 h t π φ uni(1,K) #tuples s (a) h t π φ uni(1,K) #tuples s T (b) h t π φ uni(1,K) #tuples a θ s A (c) h t π φ uni(1,K) #tuples a θ s A T (d) Figure 5: Model variants (Dirichlet priors are omitted for simplicity): 5a) HT model ran on single data. This model is equivalent to 5b) with T=1; 5b) HT model ran on multi data; 5c) HT+A model ran on single data; 5d) HT+A model ran on multi data. Data HT HT+A P R F P R F Single 29.59 51.17 37.48 30.22 52.41 38.33 Multi 29.32 52.21 37.52 30.82 51.68 38.55 Coref 39.99 53.53 40.01 32.42 54.59 40.62 Table 1: Improvement from using attributes. erences. Using both attributes in the model and coreference to generate input data results in a gain of 3 F-score points. 5.3 Experiment 2: Document Classification In the second experiment, we evaluated our model with a post-processing step of document classification. The MUC-4 corpus contains many “irrelevant” documents. A document is irrelevant if it contains no template. Among 1,300 documents in the development set, 567 are irrelevant. The most challenging part is that there are many terrorist entities, e.g. bomb, force, guerrilla, occurring in irrelevant documents. 
That makes filtering out those documents important, but difficult. As document classification is not explicitly performed by our model, a post-processing step is needed. Document classification is expected to reduce false positives in irrelevant documents while not dramatically reducing recall. Given a document d with slot-assigned entities and a set of mapped slots S_m resulting from slot mapping, we have to decide whether this document is relevant or not. We define the relevance score of a document as:

relevance(d) = \frac{\sum_{e \in d : s_e \in S_m} \sum_{t \in T_e} P(t|s_e)}{\sum_{e \in d} \sum_{t \in T_e} P(t|s_e)}    (4)

where e is an entity in the document d; s_e is the slot value assigned to e; and t is an event trigger in the list of triggers T_e. Equation (4) defines the score of an entity as the sum of the conditional probabilities of its triggers given a slot. The relevance score of the document is proportional to the score of the entities assigned to mapped slots. If this relevance score is higher than a threshold λ, then the document is considered relevant. The value of λ = 0.02 was tuned on the development set by maximizing the F-score of document classification.

System                          P      R      F
HT+A                            32.42  54.59  40.62
HT+A + doc. classification      35.57  53.89  42.79
HT+A + oracle classification    44.58  54.59  49.08
Table 2: Improvement from document classification as post-processing.

Table 2 shows the improvement when applying document classification. The precision increases as false positives from irrelevant documents are filtered out. The loss of recall comes from relevant documents that are mistakenly filtered out. However, this loss is not significant and the overall F-score finally increases by 5%. We also compare our results to an “oracle” classifier that would remove all irrelevant documents while preserving all relevant ones. The performance of this oracle classification shows that there is some room for further improvement from document classification.

Irrelevant document filtering is a technique applied by most supervised and unsupervised approaches. Supervised methods prefer relevance detection at the sentence or phrase level (Patwardhan and Riloff, 2009; Patwardhan and Riloff, 2007). Among unsupervised methods, Chambers (2013) includes document classification in his topic model. Chambers and Jurafsky (2011) and Cheung et al. (2013) use the learned clusters to classify documents by estimating the relevance of a document with respect to a template from post-hoc statistics about event triggers.

5.4 Comparison to State-of-the-Art

To compare our results in more depth with the state of the art, we reimplemented the method proposed in Chambers (2013) and integrated our attribute distributions into his model (as shown in Figure 6).

System                          P   R   F
Cheung et al. (2013)            32  37  34
Chambers and Jurafsky (2011)    48  25  33
Chambers (2013) (paper values)  41  41  41
HT+A + doc. classification      36  54  43
Table 3: Comparison to state-of-the-art unsupervised systems.

The main differences between this model and ours are the following:

1. The full template model of Chambers (2013) adds a distribution ψ linking events to documents. This makes the model more complex and maybe less intuitive, since there is no reason to connect documents and slots (a document may contain references to several templates, and slot mapping does not depend on the document level). A benefit of this document distribution is that it leads to a free classification of irrelevant documents, thus avoiding a pre- or post-processing step for classification.
However, this issue of document relevance is very specific to the MUC corpus and the evaluation method; In a more general use case, there would be no “irrelevant” documents, only documents on various topics. 2. Each entity is linked to an event variable e. This event generates a predicate for each entity mention (recall that mentions of an entity are all occurrences of this entity in the documents, for example in a coreference chain). Our work instead focus on the fact that a probabilistic model could have multiple observations at the same position. Multiple triggers and multiple attributes are treated equally. The sources of multiple attributes and multiple triggers are not only from document-level coreferences but also from dependency relations (or even from domain-level entity coreferences if available). Hence, our model arguably generalizes better in terms of both modeling and input data. 3. Chambers (2013) applies a heuristic constraint during the sampling process, imposing that subject and object of the same predicate (e.g. kill:nsubj and kill:dobj) are not distributed into the same slot. Our model does not require this heuristic. Some details concerning data preprocessing and model parameters are not fully specified by Chambers (2013); for this reason, our implementation of the model (applied on the same data) leads to slightly different results than those published. That is why we present the two results here (paper values in Table 3, reimplementation values in Table 4). Table 3 shows that our model outperforms the others on recall by a large margin. It achieves the 194 h t π φ ψ #tuples s M e p #docs τ (a) h t π φ ψ #tuples a θ s A M e p #docs τ (b) Figure 6: Variation of Chambers (2013) model: 6a) Original model; 6b) Original model + attribute distributions. Chambers (2013) P R F Original reimpl. 38.65 42.68 40.56 Original reimpl. + Attribute 39.25 43.68 41.31 Table 4: Performance on reimplementation of Chambers (2013). best overall F-score. In addition, as stated by our experiments, precision could be further improved by more sophisticated document classification. Interestingly, using attributes also proves to be useful in the model proposed by Chambers (2013) (as shown in Table 4). 5.5 Error Analysis We performed an error analysis on the output of HT+A + doc. classification to detect the origin of false positives (FPs). 38% of FPs are mentions that never occur in the reference. Within this 38%, attacker and killer are among the most frequent errors. These words could refer to a perpetrator of an attack. These mentions, however, do not occur in the reference, possibly because human annotators consider them as too generic terms. Apart from such generic terms, other assignments are obvious errors of the system, e.g. window, door or wall as physical target; action or massacre as perpetrator; explosion or shooting as instrument. These kinds of errors are due to the fact that in our model, as in the one of Chambers (2013), the number of slots is fixed and is not equivalent to the real number of reference slots. On the other hand, 62% of FPs are mentions of entities that occur at least once in the reference. On top of the list are perpetrators such as guerrilla, group and rebel. The model is capable of assigning guerrilla to attribution slot if it is accompanied by a trigger like announce:nsubj. However, triggers that describe quasi-terrorism events (e.g. menace, threatening, military conflict) are also grouped into perpetrator slots. 
Similarly, mentions of frequent words such as bomb (instrument), building, house, office (targets) tend to be systematically grouped into these slots, regardless of their relations. Increasing the number of slots (to sharpen their content) does not help overall. This is due to the fact that the MUC corpus is very small and is biased towards terrorism events. Adding a higher level of template type as in Chambers (2013) partially solves the problem but makes recall decrease (as shown in Table 3). 6 Conclusions and Perspectives We presented a generative model for representing the roles played by the entities in an event template. We focused on using immediate contexts of entities and proposed a simpler and more effective model than those proposed in previous work. We evaluated this model on the MUC-4 corpus. Even if our results outperform other unsupervised approaches, we are still far from results obtained by supervised systems. Improvements can be obtained by several ways. First, the characteristics of the MUC-4 corpus are a limiting factor. The corpus is small and roles are similar from a template to another, which does not reflect reality. 195 A bigger corpus, even partially annotated but presenting a better variety of templates, could lead to very different approaches. As we showed, our model comes with a unified representation of all types of relations. This opens the way to the use of multiple types of relations (syntactic, semantic, thematic, etc.) to refine the clusters. Last but not least, the evaluation protocol, that became a kind of de facto standard, is very much imperfect. Most notably, the way of finally mapping with reference slots can have a great influence on the results. Acknowledgment This work was partially financed by the Foundation for Scientific Cooperation “Campus ParisSaclay” (FSC) under the project Digiteo ASTRE No. 2013-0774D. References Niranjan Balasubramanian, Stephen Soderland, Mausam, and Oren Etzioni. 2013. Generating Coherent Event Schemas at Scale. In 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP 2013), pages 1721–1731, Seattle, Washington, USA, October. Cosmin Adrian Bejan. 2008. Unsupervised Discovery of Event Scenarios from Texts. In Twenty-First International Florida Artificial Intelligence Research Society Conference (FLAIRS 2008), pages 124–129, Coconut Grove, Florida. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet Allocation. Journal of Machine Learning Research, 3:993–1022, March. Nathanael Chambers and Dan Jurafsky. 2008. Unsupervised Learning of Narrative Event Chains. In ACL-08: HLT, pages 789–797, Columbus, Ohio, June. Nathanael Chambers and Dan Jurafsky. 2009. Unsupervised Learning of Narrative Schemas and their Participants. In Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP (ACL-IJCNLP’09), pages 602–610, Suntec, Singapore, August. Nathanael Chambers and Dan Jurafsky. 2011. Template-Based Information Extraction without the Templates. In 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies (ACL 2011), pages 976–986, Portland, Oregon, USA, June. Nathanael Chambers. 2013. Event Schema Induction with a Probabilistic Entity-Driven Model. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1797– 1807, Seattle, Washington, USA, October. Kit Jackie Chi Cheung, Hoifung Poon, and Lucy Vanderwende. 2013. 
Probabilistic Frame Induction. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 837–846. R. Collier. 1998. Automatic Template Creation for Information Extraction. Ph.D. thesis, University of Sheffield. Gerald DeJong. 1982. An overview of the FRUMP system. In W. Lehnert and M. Ringle, editors, Strategies for natural language processing, pages 149–176. Lawrence Erlbaum Associates. Kathrin Eichler, Holmer Hemsen, and G¨unter Neumann. 2008. Unsupervised Relation Extraction From Web Documents. In 6th Conference on Language Resources and Evaluation (LREC’08), Marrakech, Morocco. Olivier Ferret and Brigitte Grau. 1997. An Aggregation Procedure for Building Episodic Memory. In 15th International Joint Conference on Artificial Intelligence (IJCAI-97), pages 280–285, Nagoya, Japan. Elena Filatova, Vasileios Hatzivassiloglou, and Kathleen McKeown. 2006. Automatic Creation of Domain Templates. In 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL 2006), pages 207–214, Sydney, Australia. Elena Filatova. 2008. Unsupervised Relation Learning for Event-Focused Question-Answering and Domain Modelling. Ph.D. thesis, Columbia University. Marjorie Freedman, Lance Ramshaw, Elizabeth Boschee, Ryan Gabbard, Gary Kratkiewicz, Nicolas Ward, and Ralph Weischedel. 2011. Extreme Extraction – Machine Reading in a Week. In 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP 2011), pages 1437– 1446, Edinburgh, Scotland, UK., July. Lea Frermann, Ivan Titov, and Manfred Pinkal. 2014. A Hierarchical Bayesian Model for Unsupervised Induction of Script Knowledge. In 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014), pages 49–57, Gothenburg, Sweden, April. Tom Griffiths. 2002. Gibbs sampling in the generative model of Latent Dirichlet Allocation. Technical report, Stanford University. 196 Ralph Grishman and Yifan He. 2014. An Information Extraction Customizer. In Petr Sojka, Ale Hork, Ivan Kopeek, and Karel Pala, editors, 17th International Conference on Text, Speech and Dialogue (TSD 2014), volume 8655 of Lecture Notes in Computer Science, pages 3–10. Springer International Publishing. Ralph Grishman and Beth Sundheim. 1996. Message Understanding Conference-6: A Brief History. In 16th International Conference on Computational linguistics (COLING’96), pages 466–471, Copenhagen, Denmark. Sanda Harabagiu. 2004. Incremental Topic Representation. In Proceedings of the 20th International Conference on Computational Linguistics (COLING’04), Geneva, Switzerland, August. Takaaki Hasegawa, Satoshi Sekine, and Ralph Grishman. 2004. Discovering Relations among Named Entities from Large Corpora. In 42nd Meeting of the Association for Computational Linguistics (ACL’04), pages 415–422, Barcelona, Spain. Ludovic Jean-Louis, Romaric Besanon, and Olivier Ferret. 2011. Text Segmentation and Graph-based Method for Template Filling in Information Extraction. In 5th International Joint Conference on Natural Language Processing (IJCNLP 2011), pages 723–731, Chiang Mai, Thailand. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP Natural Language Processing Toolkit. 
In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60, Baltimore, USA, jun. Bonan Min, Shuming Shi, Ralph Grishman, and ChinYew Lin. 2012. Ensemble Semantics for Largescale Unsupervised Relation Extraction. In 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012, pages 1027–1037, Jeju Island, Korea. Ashutosh Modi and Ivan Titov. 2014. Inducing neural models of script knowledge. In Eighteenth Conference on Computational Natural Language Learning (CoNLL 2014), pages 49–57, Ann Arbor, Michigan. Siddharth Patwardhan and Ellen Riloff. 2007. Effective Information Extraction with Semantic Affinity Patterns and Relevant Regions. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2007), pages 717–727, Prague, Czech Republic, June. Siddharth Patwardhan and Ellen Riloff. 2009. A Unified Model of Phrasal and Sentential Evidence for Information Extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP 2009), pages 151–160. Karl Pichotta and Raymond Mooney. 2014. Statistical script learning with multi-argument events. In 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014), pages 220–229, Gothenburg, Sweden. Long Qiu, Min-Yen Kan, and Tat-Seng Chua. 2008. Modeling Context in Scenario Template Creation. In Third International Joint Conference on Natural Language Processing (IJCNLP 2008), pages 157– 164, Hyderabad, India. Michaela Regneri, Alexander Koller, and Manfred Pinkal. 2010. Learning Script Knowledge with Web Experiments. In 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), pages 979–988, Uppsala, Sweden, July. Benjamin Rosenfeld and Ronen Feldman. 2007. Clustering for unsupervised relation identification. In Sixteenth ACM conference on Conference on information and knowledge management (CIKM’07), pages 411–418, Lisbon, Portugal. Roger C. Schank. 1980. Language and memory. Cognitive Science, 4:243–284. Satoshi Sekine. 2006. On-demand information extraction. In 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL 2006), pages 731–738, Sydney, Australia. Yusuke Shinyama and Satoshi Sekine. 2006. Preemptive Information Extraction using Unrestricted Relation Discovery. In HLT-NAACL 2006, pages 304– 311, New York City, USA. Beth M. Sundheim. 1991. Third Message Understanding Evaluation and Conference (MUC-3): Phase 1 Status Report. In Proceedings of the Workshop on Speech and Natural Language, HLT ’91, pages 301– 305. 197
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 11–19, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Addressing the Rare Word Problem in Neural Machine Translation Minh-Thang Luong† ∗ Stanford [email protected] Ilya Sutskever† Google Quoc V. Le† Google {ilyasu,qvl,vinyals}@google.com Oriol Vinyals Google Wojciech Zaremba∗ New York University [email protected] Abstract Neural Machine Translation (NMT) is a new approach to machine translation that has shown promising results that are comparable to traditional approaches. A significant weakness in conventional NMT systems is their inability to correctly translate very rare words: end-to-end NMTs tend to have relatively small vocabularies with a single unk symbol that represents every possible out-of-vocabulary (OOV) word. In this paper, we propose and implement an effective technique to address this problem. We train an NMT system on data that is augmented by the output of a word alignment algorithm, allowing the NMT system to emit, for each OOV word in the target sentence, the position of its corresponding word in the source sentence. This information is later utilized in a post-processing step that translates every OOV word using a dictionary. Our experiments on the WMT’14 English to French translation task show that this method provides a substantial improvement of up to 2.8 BLEU points over an equivalent NMT system that does not use this technique. With 37.5 BLEU points, our NMT system is the first to surpass the best result achieved on a WMT’14 contest task. 1 Introduction Neural Machine Translation (NMT) is a novel approach to MT that has achieved promising results (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015; Jean et al., 2015). An NMT system is a conceptually simple large neural network that reads the en∗Work done while the authors were in Google. † indicates equal contribution. tire source sentence and produces an output translation one word at a time. NMT systems are appealing because they use minimal domain knowledge which makes them well-suited to any problem that can be formulated as mapping an input sequence to an output sequence (Sutskever et al., 2014). In addition, the natural ability of neural networks to generalize implies that NMT systems will also generalize to novel word phrases and sentences that do not occur in the training set. In addition, NMT systems potentially remove the need to store explicit phrase tables and language models which are used in conventional systems. Finally, the decoder of an NMT system is easy to implement, unlike the highly intricate decoders used by phrase-based systems (Koehn et al., 2003). Despite these advantages, conventional NMT systems are incapable of translating rare words because they have a fixed modest-sized vocabulary1 which forces them to use the unk symbol to represent the large number of out-of-vocabulary (OOV) words, as illustrated in Figure 1. Unsurprisingly, both Sutskever et al. (2014) and Bahdanau et al. (2015) have observed that sentences with many rare words tend to be translated much more poorly than sentences containing mainly frequent words. 
Standard phrase-based systems (Koehn et al., 2007; Chiang, 2007; Cer et al., 2010; Dyer et al., 2010), on the other hand, do not suffer from the rare word problem to the same extent because they can support a much larger vocabulary, and because their use of explicit alignments and phrase tables allows them to memorize the translations of even extremely rare words. Motivated by the strengths of standard phrase1Due to the computationally intensive nature of the softmax, NMT systems often limit their vocabularies to be the top 30K-80K most frequent words in each language. However, Jean et al. (2015) has very recently proposed an efficient approximation to the softmax that allows for training NTMs with very large vocabularies. As discussed in Section 2, this technique is complementary to ours. 11 en: The ecotax portico in Pont-de-Buis , . . . [truncated] . . . , was taken down on Thursday morning fr: Le portique ´ecotaxe de Pont-de-Buis , . . . [truncated] . . . , a ´et´e d´emont´e jeudi matin nn: Le unk de unk `a unk , . . . [truncated] . . . , a ´et´e pris le jeudi matin ✟✟✟✟ ❍ ❍ ❍ ❍ ❆ ❆ ❅ ❅✂✂ ✑✑✑ ✟✟✟✟ Figure 1: Example of the rare word problem – An English source sentence (en), a human translation to French (fr), and a translation produced by one of our neural network systems (nn) before handling OOV words. We highlight words that are unknown to our model. The token unk indicates an OOV word. We also show a few important alignments between the pair of sentences. based system, we propose and implement a novel approach to address the rare word problem of NMTs. Our approach annotates the training corpus with explicit alignment information that enables the NMT system to emit, for each OOV word, a “pointer” to its corresponding word in the source sentence. This information is later utilized in a post-processing step that translates the OOV words using a dictionary or with the identity translation, if no translation is found. Our experiments confirm that this approach is effective. On the English to French WMT’14 translation task, this approach provides an improvement of up to 2.8 (if the vocabulary is relatively small) BLEU points over an equivalent NMT system that does not use this technique. Moreover, our system is the first NMT that outperforms the winner of a WMT’14 task. 2 Neural Machine Translation A neural machine translation system is any neural network that maps a source sentence, s1, . . . , sn, to a target sentence, t1, . . . , tm, where all sentences are assumed to terminate with a special “end-of-sentence” token <eos>. More concretely, an NMT system uses a neural network to parameterize the conditional distributions p(tj|t<j, s≤n) (1) for 1 ≤j ≤m. By doing so, it becomes possible to compute and therefore maximize the log probability of the target sentence given the source sentence log p(t|s) = m X j=1 log p (tj|t<j, s≤n) (2) There are many ways to parameterize these conditional distributions. For example, Kalchbrenner and Blunsom (2013) used a combination of a convolutional neural network and a recurrent neural network, Sutskever et al. (2014) used a deep Long Short-Term Memory (LSTM) model, Cho et al. (2014) used an architecture similar to the LSTM, and Bahdanau et al. (2015) used a more elaborate neural network architecture that uses an attentional mechanism over the input sequence, similar to Graves (2013) and Graves et al. (2014). In this work, we use the model of Sutskever et al. 
(2014), which uses a deep LSTM to encode the input sequence and a separate deep LSTM to output the translation. The encoder reads the source sentence, one word at a time, and produces a large vector that represents the entire source sentence. The decoder is initialized with this vector and generates a translation, one word at a time, until it emits the end-of-sentence symbol <eos>. None the early work in neural machine translation systems has addressed the rare word problem, but the recent work of Jean et al. (2015) has tackled it with an efficient approximation to the softmax to accommodate for a very large vocabulary (500K words). However, even with a large vocabulary, the problem with rare words, e.g., names, numbers, etc., still persists, and Jean et al. (2015) found that using techniques similar to ours are beneficial and complementary to their approach. 3 Rare Word Models Despite the relatively large amount of work done on pure neural machine translation systems, there has been no work addressing the OOV problem in NMT systems, with the notable exception of Jean et al. (2015)’s work mentioned earlier. We propose to address the rare word problem by training the NMT system to track the origins of the unknown words in the target sentences. If we knew the source word responsible for each un12 en: The unk1 portico in unk2 . . . fr: Le unk∅unk1 de unk2 . . . Figure 2: Copyable Model – an annotated example with two types of unknown tokens: “copyable” unkn and null unk∅. known target word, we could introduce a postprocessing step that would replace each unk in the system’s output with a translation of its source word, using either a dictionary or the identity translation. For example, in Figure 1, if the model knows that the second unknown token in the NMT (line nn) originates from the source word ecotax, it can perform a word dictionary lookup to replace that unknown token by ´ecotaxe. Similarly, an identity translation of the source word Pont-de-Buis can be applied to the third unknown token. We present three annotation strategies that can easily be applied to any NMT system (Kalchbrenner and Blunsom, 2013; Sutskever et al., 2014; Cho et al., 2014). We treat the NMT system as a black box and train it on a corpus annotated by one of the models below. First, the alignments are produced with an unsupervised aligner. Next, we use the alignment links to construct a word dictionary that will be used for the word translations in the post-processing step.2 If a word does not appear in our dictionary, then we apply the identity translation. The first few words of the sentence pair in Figure 1 (lines en and fr) illustrate our models. 3.1 Copyable Model In this approach, we introduce multiple tokens to represent the various unknown words in the source and in the target language, as opposed to using only one unk token. We annotate the OOV words in the source sentence with unk1, unk2, unk3, in that order, while assigning repeating unknown words identical tokens. The annotation of the unknown words in the target language is slightly more elaborate: (a) each unknown target word that is aligned to an unknown source word is assigned the same unknown token (hence, the 2When a source word has multiple translations, we use the translation with the highest probability. These translation probabilities are estimated from the unsupervised alignment links. When constructing the dictionary from these alignment links, we add a word pair to the dictionary only if its alignment count exceeds 100. 
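To make the copyable annotation concrete, here is a minimal Python sketch (illustrative only, not code from the paper). It assumes one alignment link per target word, produced by an external unsupervised aligner; the token name unk_null stands in for unk∅, and, unlike the paper's examples, the number of distinct copy tokens is not capped.

```python
def annotate_copyable(src_tokens, tgt_tokens, src_vocab, tgt_vocab, alignment):
    """Annotate a sentence pair in the style of the copyable model (sketch).

    alignment: dict mapping a target position to its aligned source position.
    Source OOVs become unk1, unk2, ... (repeated words reuse their token);
    a target OOV copies the token of its aligned source OOV, and receives
    "unk_null" when it is unaligned or aligned to an in-vocabulary word.
    """
    src_unk_ids = {}              # OOV source word -> its unkN token
    annotated_src = []
    for w in src_tokens:
        if w in src_vocab:
            annotated_src.append(w)
        else:
            if w not in src_unk_ids:
                src_unk_ids[w] = "unk%d" % (len(src_unk_ids) + 1)
            annotated_src.append(src_unk_ids[w])

    annotated_tgt = []
    for j, w in enumerate(tgt_tokens):
        if w in tgt_vocab:
            annotated_tgt.append(w)
            continue
        i = alignment.get(j)
        if i is not None and src_tokens[i] not in src_vocab:
            annotated_tgt.append(src_unk_ids[src_tokens[i]])   # "copy" the token
        else:
            annotated_tgt.append("unk_null")
    return annotated_src, annotated_tgt
```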
en: The unk portico in unk . . . fr: Le p0 unk p−1 unk p1 de p∅unk p−1 . . . Figure 3: Positional All Model – an example of the PosAll model. Each word is followed by the relative positional tokens pd or the null token p∅. “copy” model) and (b) an unknown target word that has no alignment or that is aligned with a known word uses the special null token unk∅. See Figure 2 for an example. This annotation enables us to translate every non-null unknown token. 3.2 Positional All Model (PosAll) The copyable model is limited by its inability to translate unknown target words that are aligned to known words in the source sentence, such as the pair of words, “portico” and “portique”, in our running example. The former word is known on the source sentence; whereas latter is not, so it is labelled with unk∅. This happens often since the source vocabularies of our models tend to be much larger than the target vocabulary since a large source vocabulary is cheap. This limitation motivated us to develop an annotation model that includes the complete alignments between the source and the target sentences, which is straightforward to obtain since the complete alignments are available at training time. Specifically, we return to using only a single universal unk token. However, on the target side, we insert a positional token pd after every word. Here, d indicates a relative position (d = −7, . . . , −1, 0, 1, . . . , 7) to denote that a target word at position j is aligned to a source word at position i = j −d. Aligned words that are too far apart are considered unaligned, and unaligned words rae annotated with a null token pn. Our annotation is illustrated in Figure 3. 3.3 Positional Unknown Model (PosUnk) The main weakness of the PosAll model is that it doubles the length of the target sentence. This makes learning more difficult and slows the speed of parameter updates by a factor of two. However, given that our post-processing step is concerned only with the alignments of the unknown words, so it is more sensible to only annotate the unknown words. This motivates our positional unknown model which uses unkposd tokens (for d in −7, . . . , 7 or ∅) to simultaneously denote (a) 13 the fact that a word is unknown and (b) its relative position d with respect to its aligned source word. Like the PosAll model, we use the symbol unkpos∅for unknown target words that do not have an alignment. We use the universal unk for all unknown tokens in the source language. See Figure 4 for an annotated example. en: The unk portico in unk . . . fr: Le unkpos1 unkpos−1 de unkpos1 . . . Figure 4: Positional Unknown Model – an example of the PosUnk model: only aligned unknown words are annotated with the unkposd tokens. It is possible that despite its slower speed, the PosAll model will learn better alignments because it is trained on many more examples of words and their alignments. However, we show that this is not the case (see §5.2). 4 Experiments We evaluate the effectiveness of our OOV models on the WMT’14 English-to-French translation task. Translation quality is measured with the BLEU metric (Papineni et al., 2002) on the newstest2014 test set (which has 3003 sentences). 4.1 Training Data To be comparable with the results reported by previous work on neural machine translation systems (Sutskever et al., 2014; Cho et al., 2014; Bahdanau et al., 2015), we train our models on the same training data of 12M parallel sentences (348M French and 304M English words), obtained from (Schwenk, 2014). 
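A corresponding sketch of the PosUnk annotation (again illustrative only; unkpos_null stands in for unkpos∅, and the same single-link alignment format as above is assumed): a target OOV at position j aligned to source position i is rewritten as unkpos_d with d = j − i, and alignments farther apart than seven positions are treated as unaligned.

```python
def annotate_posunk(src_tokens, tgt_tokens, src_vocab, tgt_vocab,
                    alignment, max_dist=7):
    """Annotate target OOVs with positional unknown tokens (PosUnk sketch)."""
    # Every source OOV is mapped to the single universal unk symbol.
    annotated_src = [w if w in src_vocab else "unk" for w in src_tokens]
    annotated_tgt = []
    for j, w in enumerate(tgt_tokens):
        if w in tgt_vocab:
            annotated_tgt.append(w)
            continue
        i = alignment.get(j)
        if i is None or abs(j - i) > max_dist:
            annotated_tgt.append("unkpos_null")     # unaligned or too far
        else:
            annotated_tgt.append("unkpos%d" % (j - i))
    return annotated_src, annotated_tgt
```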
The 12M subset was selected from the full WMT’14 parallel corpora using the method proposed in Axelrod et al. (2011). Due to the computationally intensive nature of the naive softmax, we limit the French vocabulary (the target language) to the either the 40K or the 80K most frequent French words. On the source side, we can afford a much larger vocabulary, so we use the 200K most frequent English words. The model treats all other words as unknowns.3 We annotate our training data using the three schemes described in the previous section. The alignment is computed with the Berkeley aligner (Liang et al., 2006) using its default settings. We 3When the French vocabulary has 40K words, there are on average 1.33 unknown words per sentence on the target side of the test set. discard sentence pairs in which the source or the target sentence exceed 100 tokens. 4.2 Training Details Our training procedure and hyperparameter choices are similar to those used by Sutskever et al. (2014). In more details, we train multi-layer deep LSTMs, each of which has 1000 cells, with 1000 dimensional embeddings. Like Sutskever et al. (2014), we reverse the words in the source sentences which has been shown to improve LSTM memory utilization and results in better translations of long sentences. Our hyperparameters can be summarized as follows: (a) the parameters are initialized uniformly in [-0.08, 0.08] for 4-layer models and [-0.06, 0.06] for 6-layer models, (b) SGD has a fixed learning rate of 0.7, (c) we train for 8 epochs (after 5 epochs, we begin to halve the learning rate every 0.5 epoch), (d) the size of the mini-batch is 128, and (e) we rescale the normalized gradient to ensure that its norm does not exceed 5 (Pascanu et al., 2012). We also follow the GPU parallelization scheme proposed in (Sutskever et al., 2014), allowing us to reach a training speed of 5.4K words per second to train a depth-6 model with 200K source and 80K target vocabularies ; whereas Sutskever et al. (2014) achieved 6.3K words per second for a depth-4 models with 80K source and target vocabularies. Training takes about 10-14 days on an 8-GPU machine. 4.3 A note on BLEU scores We report BLEU scores based on both: (a) detokenized translations, i.e., WMT’14 style, to be comparable with results reported on the WMT website4 and (b) tokenized translations, so as to be consistent with previous work (Cho et al., 2014; Bahdanau et al., 2015; Schwenk, 2014; Sutskever et al., 2014; Jean et al., 2015).5 The existing WMT’14 state-of-the-art system (Durrani et al., 2014) achieves a detokenized BLEU score of 35.8 on the newstest2014 test set for English to French language pair (see Table 2). In terms of the tokenized BLEU, its performance is 37.0 points (see Table 1). 4http://matrix.statmt.org/matrix 5The tokenizer.perl and multi-bleu.pl scripts are used to tokenize and score translations. 14 System Vocab Corpus BLEU State of the art in WMT’14 (Durrani et al., 2014) All 36M 37.0 Standard MT + neural components Schwenk (2014) – neural language model All 12M 33.3 Cho et al. (2014)– phrase table neural features All 12M 34.5 Sutskever et al. (2014) – 5 LSTMs, reranking 1000-best lists All 12M 36.5 Existing end-to-end NMT systems Bahdanau et al. (2015) – single gated RNN with search 30K 12M 28.5 Sutskever et al. (2014) – 5 LSTMs 80K 12M 34.8 Jean et al. 
(2015) – 8 gated RNNs with search + UNK replacement 500K 12M 37.2 Our end-to-end NMT systems Single LSTM with 4 layers 40K 12M 29.5 Single LSTM with 4 layers + PosUnk 40K 12M 31.8 (+2.3) Single LSTM with 6 layers 40K 12M 30.4 Single LSTM with 6 layers + PosUnk 40K 12M 32.7 (+2.3) Ensemble of 8 LSTMs 40K 12M 34.1 Ensemble of 8 LSTMs + PosUnk 40K 12M 36.9 (+2.8) Single LSTM with 6 layers 80K 36M 31.5 Single LSTM with 6 layers + PosUnk 80K 36M 33.1 (+1.6) Ensemble of 8 LSTMs 80K 36M 35.6 Ensemble of 8 LSTMs + PosUnk 80K 36M 37.5 (+1.9) Table 1: Tokenized BLEU on newstest2014 – Translation results of various systems which differ in terms of: (a) the architecture, (b) the size of the vocabulary used, and (c) the training corpus, either using the full WMT’14 corpus of 36M sentence pairs or a subset of it with 12M pairs. We highlight the performance of our best system in bolded text and state the improvements obtained by our technique of handling rare words (namely, the PosUnk model). Notice that, for a given vocabulary size, the more accurate systems achieve a greater improvement from the post-processing step. This is the case because the more accurate models are able to pin-point the origin of an unknown word with greater accuracy, making the post-processing more useful. System BLEU Existing SOTA (Durrani et al., 2014) 35.8 Ensemble of 8 LSTMs + PosUnk 36.6 Table 2: Detokenized BLEU on newstest2014 – translation results of the existing state-of-the-art system and our best system. 4.4 Main Results We compare our systems to others, including the current state-of-the-art MT system (Durrani et al., 2014), recent end-to-end neural systems, as well as phrase-based baselines with neural components. The results shown in Table 1 demonstrate that our unknown word translation technique (in particular, the PosUnk model) significantly improves the translation quality for both the individual (nonensemble) LSTM models and the ensemble models.6 For 40K-word vocabularies, the performance gains are in the range of 2.3-2.8 BLEU points. With larger vocabularies (80K), the performance gains are diminished, but our technique can still provide a nontrivial gains of 1.6-1.9 BLEU points. It is interesting to observe that our approach is more useful for ensemble models as compared to the individual ones. This is because the usefulness of the PosUnk model directly depends on the ability of the NMT to correctly locate, for a given OOV target word, its corresponding word in the source sentence. An ensemble of large models identifies these source words with greater accuracy. This is why for the same vocabulary size, better models obtain a greater performance gain 6For the 40K-vocabulary ensemble, we combine 5 models with 4 layers and 3 models with 6 layers. For the 80Kvocabulary ensemble, we combine 3 models with 4 layers and 5 models with 6 layers. Two of the depth-6 models are regularized with dropout, similar to Zaremba et al. (2015) with the dropout probability set to 0.2. 15 our post-processing step. e Except for the very recent work of Jean et al. (2015) that employs a similar unknown treatment strategy7 as ours, our best result of 37.5 BLEU outperforms all other NMT systems by a arge margin, and more importanly, our system has established a new record on the WMT’14 English to French translation. 5 Analysis We analyze and quantify the improvement obtained by our rare word translation approach and provide a detailed comparison of the different rare word techniques proposed in Section 3. 
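The post-processing step that produces these gains is simple enough to sketch. The snippet below is an illustration rather than the authors' code: it assumes the unkpos_null naming from the annotation sketch above and glosses over indexing details such as the source-side reversal used during training. Each positional unknown token is replaced by a dictionary translation of the source word it points to, with the identity translation as a fallback.

```python
import re

def postprocess_posunk(output_tokens, src_tokens, dictionary):
    """Replace unkpos_d tokens in the NMT output (sketch).

    A token "unkpos<d>" at target position j points to source position
    i = j - d; that source word is looked up in a bilingual dictionary,
    falling back to the identity translation if it is not found.
    """
    result = []
    for j, tok in enumerate(output_tokens):
        m = re.match(r"unkpos(-?\d+)$", tok)
        if m is None:
            if tok != "unkpos_null":    # drop tokens with no usable pointer
                result.append(tok)
            continue
        i = j - int(m.group(1))
        if 0 <= i < len(src_tokens):
            word = src_tokens[i]
            result.append(dictionary.get(word, word))
        # a pointer falling outside the source sentence is simply dropped
    return result
```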
We also examine the effect of depth on the LSTM architectures and demonstrate a strong correlation between perplexities and BLEU scores. We also highlight a few translation examples where our models succeed in correctly translating OOV words, and present several failures. 5.1 Rare Word Analysis To analyze the effect of rare words on translation quality, we follow Sutskever et al. (Sutskever et al., 2014) and sort sentences in newstest2014 by the average inverse frequency of their words. We split the test sentences into groups where the sentences within each group have a comparable number of rare words and evaluate each group independently. We evaluate our systems before and after translating the OOV words and compare with the standard MT systems – we use the best system from the WMT’14 contest (Durrani et al., 2014), and neural MT systems – we use the ensemble systems described in (Sutskever et al., 2014) and Section 4. Rare word translation is challenging for neural machine translation systems as shown in Figure 5. Specifically, the translation quality of our model before applying the postprocessing step is shown by the green curve, and the current best NMT system (Sutskever et al., 2014) is the purple curve. While (Sutskever et al., 2014) produces better translations for sentences with frequent words (the left part of the graph), they are worse than best 7Their unknown replacement method and ours both track the locations of target unknown words and use a word dictionary to post-process the translation. However, the mechanism used to achieve the “tracking” behavior is different. Jean et al. (2015)’s uses the attentional mechanism to track the origins of all target words, not just the unknown ones. In contrast, we only focus on tracking unknown words using unsupervised alignments. Our method can be easily applied to any sequence-to-sequence models since we treat any model as a blackbox and manipulate only at the input and output levels. 0 500 1000 1500 2000 2500 3000 28 30 32 34 36 38 40 42 Sents BLEU SOTA Durrani et al. (37.0) Sutskever et al. (34.8) Ours (35.6) Ours + PosUnk (37.5) Figure 5: Rare word translation – On the x-axis, we order newstest2014 sentences by their average frequency rank and divide the sentences into groups of sentences with a comparable prevalence of rare words. We compute the BLEU score of each group independently. system (red curve) on sentences with many rare words (the right side of the graph). When applying our unknown word translation technique (purple curve), we significantly improve the translation quality of our NMT: for the last group of 500 sentences which have the greatest proportion of OOV words in the test set, we increase the BLEU score of our system by 4.8 BLEU points. Overall, our rare word translation model interpolates between the SOTA system and the system of Sutskever et al. (2014), which allows us to outperform the winning entry of WMT’14 on sentences that consist predominantly of frequent words and approach its performance on sentences with many OOV words. 
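To make this analysis protocol concrete, a possible implementation of the sentence grouping is sketched below (this is one reading of the description above, not the authors' script; in particular, treating words unseen in training as having a count of one is an assumption).

```python
def group_by_rarity(sentences, word_freq, group_size=500):
    """Order test sentences (token lists) by the average inverse
    training-set frequency of their words and split them into groups of
    comparable rarity, which are then scored separately."""
    def avg_inv_freq(sent):
        return sum(1.0 / word_freq.get(w, 1) for w in sent) / max(len(sent), 1)
    ranked = sorted(sentences, key=avg_inv_freq)
    return [ranked[i:i + group_size] for i in range(0, len(ranked), group_size)]
```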
5.2 Rare Word Models We examine the effect of the different rare word models presented in Section 3, namely: (a) Copyable – which aligns the unknown words on both the input and the target side by learning to copy indices, (b) the Positional All (PosAll) – which predicts the aligned source positions for every target word, and (c) the Positional Unknown (PosUnk) – which predicts the aligned source positions for only the unknown target words.8 It is also interest8In this section and in section 5.3, all models are trained on the unreversed sentences, and we use the following hyperparameters: we initialize the parameters uniformly in [-0.1, 0.1], the learning rate is 1, the maximal gradient norm is 1, with a source vocabulary of 90k words, and a target vocabulary of 40k (see Section 4.2 for more details). While these LSTMs do not achieve the best possible performance, it is still useful to analyze them. 16 NoAlign (5.31) Copyable (5.38) PosAll (5.30, 1.37) PosUnk (5.32) 20 22 24 26 28 30 32 BLEU +0.8 +1.0 +2.4 +2.2 Figure 6: Rare word models – translation performance of 6-layer LSTMs: a model that uses no alignment (NoAlign) and the other rare word models (Copyable, PosAll, PosUnk). For each model, we show results before (left) and after (right) the rare word translation as well as the perplexity (in parentheses). For PosAll, we report the perplexities of predicting the words and the positions. ing to measure the improvement obtained when no alignment information is used during training. As such, we include a baseline model with no alignment knowledge (NoAlign) in which we simply assume that the ith unknown word on the target sentence is aligned to the ith unknown word in the source sentence. From the results in Figure 6, a simple monotone alignment assumption for the NoAlign model yields a modest gain of 0.8 BLEU points. If we train the model to predict the alignment, then the Copyable model offers a slightly better gain of 1.0 BLEU. Note, however, that English and French have similar word order structure, so it would be interesting to experiment with other language pairs, such as English and Chinese, in which the word order is not as monotonic. These harder language pairs potentially imply a smaller gain for the NoAlign model and a larger gain for the Copyable model. We leave it for future work. The positional models (PosAll and PosUnk) improve translation performance by more than 2 BLEU points. This proves that the limitation of the copyable model, which forces it to align each unknown output word with an unknown input word, is considerable. In contrast, the positional models can align the unknown target words with any source word, and as a result, post-processing has a much stronger effect. The PosUnk model achieves better translation results than the PosAll model which suggests that it is easier to train the LSTM Depth 3 (6.01) Depth 4 (5.71) Depth 6 (5.46) 20 22 24 26 28 30 32 BLEU +1.9 +2.0 +2.2 Figure 7: Effect of depths – BLEU scores achieved by PosUnk models of various depths (3, 4, and 6) before and after the rare word translation. Notice that the PosUnk model is more useful on more accurate models. on shorter sequences. 5.3 Other Effects Deep LSTM architecture – We compare PosUnk models trained with different number of layers (3, 4, and 6). We observe that the gain obtained by the PosUnk model increases in tandem with the overall accuracy of the model, which is consistent with the idea that larger models can point to the appropriate source word more accurately. 
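For comparison, the NoAlign baseline discussed above reduces to a particularly simple post-processing rule; a sketch follows (illustrative only, with "<unk>" as the assumed universal unknown symbol).

```python
def noalign_postprocess(output_tokens, src_tokens, src_vocab, dictionary):
    """NoAlign baseline (sketch): the i-th unknown token in the output is
    assumed to correspond to the i-th OOV word of the source sentence and is
    translated with the dictionary (identity translation as fallback)."""
    src_oovs = [w for w in src_tokens if w not in src_vocab]
    result, k = [], 0
    for tok in output_tokens:
        if tok == "<unk>":
            if k < len(src_oovs):
                word = src_oovs[k]
                result.append(dictionary.get(word, word))
            k += 1              # surplus target unks are simply dropped
        else:
            result.append(tok)
    return result
```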
Additionally, we observe that on average, each extra LSTM layer provides roughly 1.0 BLEU point improvement as demonstrated in Figure 7. 5.6 5.8 6 6.2 6.4 6.6 6.8 23 23.5 24 24.5 25 25.5 26 26.5 Perplexity BLEU Figure 8: Perplexity vs. BLEU – we show the correlation by evaluating an LSTM model with 4 layers at various stages of training. Perplexity and BLEU – Lastly, we find it interesting to observe a strong correlation between the perplexity (our training objective) and the translation quality as measured by BLEU. Figure 8 shows the performance of a 4-layer LSTM, in which we compute both perplexity and BLEU scores at different points during training. We find that on average, a reduction of 0.5 perplexity gives us roughly 1.0 BLEU point improvement. 17 Sentences src An additional 2600 operations including orthopedic and cataract surgery will help clear a backlog . trans En outre , unkpos1 op´erations suppl´ementaires , dont la chirurgie unkpos5 et la unkpos6 , permettront de r´esorber l’ arri´er´e . +unk En outre , 2600 op´erations suppl´ementaires , dont la chirurgie orthop´ediques et la cataracte , permettront de r´esorber l’ arri´er´e . tgt 2600 op´erations suppl´ementaires , notamment dans le domaine de la chirurgie orthop´edique et de la cataracte , aideront `a rattraper le retard . src This trader , Richard Usher , left RBS in 2010 and is understand to have be given leave from his current position as European head of forex spot trading at JPMorgan . trans Ce unkpos0 , Richard unkpos0 , a quitt´e unkpos1 en 2010 et a compris qu’ il est autoris´e `a quitter son poste actuel en tant que leader europ´een du march´e des points de vente au unkpos5 . +unk Ce n´egociateur , Richard Usher , a quitt´e RBS en 2010 et a compris qu’ il est autoris´e `a quitter son poste actuel en tant que leader europ´een du march´e des points de vente au JPMorgan . tgt Ce trader , Richard Usher , a quitt´e RBS en 2010 et aurait ´et´e mis suspendu de son poste de responsable europ´een du trading au comptant pour les devises chez JPMorgan src But concerns have grown after Mr Mazanga was quoted as saying Renamo was abandoning the 1992 peace accord . trans Mais les inqui´etudes se sont accrues apr`es que M. unkpos3 a d´eclar´e que la unkpos3 unkpos3 l’ accord de paix de 1992 . +unk Mais les inqui´etudes se sont accrues apr`es que M. Mazanga a d´eclar´e que la Renamo ´etait l’ accord de paix de 1992 . tgt Mais l’ inqui´etude a grandi apr`es que M. Mazanga a d´eclar´e que la Renamo abandonnait l’ accord de paix de 1992 . Table 3: Sample translations – the table shows the source (src) and the translations of our best model before (trans) and after (+unk) unknown word translations. We also show the human translations (tgt) and italicize words that are involved in the unknown word translation process. 5.4 Sample Translations We present three sample translations of our best system (with 37.5 BLEU) in Table 3. In our first example, the model translates all the unknown words correctly: 2600, orthop´ediques, and cataracte. It is interesting to observe that the model can accurately predict an alignment of distances of 5 and 6 words. The second example highlights the fact that our model can translate long sentences reasonably well and that it was able to correctly translate the unknown word for JPMorgan at the very far end of the source sentence. Lastly, our examples also reveal several penalties incurred by our model: (a) incorrect entries in the word dictionary, as with n´egociateur vs. 
trader in the second example, and (b) incorrect alignment prediction, such as when unkpos3 is incorrectly aligned with the source word was and not with abandoning, which resulted in an incorrect translation in the third sentence. 6 Conclusion We have shown that a simple alignment-based technique can mitigate and even overcome one of the main weaknesses of current NMT systems, which is their inability to translate words that are not in their vocabulary. A key advantage of our technique is the fact that it is applicable to any NMT system and not only to the deep LSTM model of Sutskever et al. (2014). A technique like ours is likely necessary if an NMT system is to achieve state-of-the-art performance on machine translation. We have demonstrated empirically that on the 18 WMT’14 English-French translation task, our technique yields a consistent and substantial improvement of up to 2.8 BLEU points over various NMT systems of different architectures. Most importantly, with 37.5 BLEU points, we have established the first NMT system that outperformed the best MT system on a WMT’14 contest dataset. Acknowledgments We thank members of the Google Brain team for thoughtful discussions and insights. The first author especially thanks Chris Manning and the Stanford NLP group for helpful comments on the early drafts of the paper. Lastly, we thank the annonymous reviewers for their valuable feedback. References Amittai Axelrod, Xiaodong He, and Jianfeng Gao. 2011. Domain adaptation via pseudo in-domain data selection. In EMNLP. D. Bahdanau, K. Cho, and Y. Bengio. 2015. Neural machine translation by jointly learning to align and translate. In ICLR. D. Cer, M. Galley, D. Jurafsky, and C. D. Manning. 2010. Phrasal: A statistical machine translation toolkit for exploring new model features. In ACL, Demonstration Session. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In EMNLP. Nadir Durrani, Barry Haddow, Philipp Koehn, and Kenneth Heafield. 2014. Edinburgh’s phrase-based machine translation systems for WMT-14. In WMT. Chris Dyer, Jonathan Weese, Hendra Setiawan, Adam Lopez, Ferhan Ture, Vladimir Eidelman, Juri Ganitkevitch, Phil Blunsom, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In ACL, Demonstration Session. A. Graves, G. Wayne, and I. Danihelka. 2014. Neural turing machines. arXiv preprint arXiv:1410.5401. A. Graves. 2013. Generating sequences with recurrent neural networks. In Arxiv preprint arXiv:1308.0850. S´ebastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. On using very large target vocabulary for neural machine translation. In ACL. N. Kalchbrenner and P. Blunsom. 2013. Recurrent continuous translation models. In EMNLP. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In NAACL. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for statistical machine translation. In ACL, Demonstration Session. P. Liang, B. Taskar, and D. Klein. 2006. Alignment by agreement. In NAACL. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. 
BLEU: a method for automatic evaluation of machine translation. In ACL. R. Pascanu, T. Mikolov, and Y. Bengio. 2012. On the difficulty of training recurrent neural networks. arXiv preprint arXiv:1211.5063. H. Schwenk. 2014. University le mans. http://www-lium.univ-lemans.fr/~schwenk/cslm_joint_paper/. [Online; accessed 03-September-2014]. I. Sutskever, O. Vinyals, and Q. V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS. Wojciech Zaremba, Ilya Sutskever, and Oriol Vinyals. 2015. Recurrent neural network regularization. In ICLR.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 198–207, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Syntax-based Simultaneous Translation through Prediction of Unseen Syntactic Constituents Yusuke Oda Graham Neubig Sakriani Sakti Tomoki Toda Satoshi Nakamura Graduate School of Information Science Nara Institute of Science and Technology Takayamacho, Ikoma, Nara 630-0192, Japan {oda.yusuke.on9, neubig, ssakti, tomoki, s-nakamura}@is.naist.jp Abstract Simultaneous translation is a method to reduce the latency of communication through machine translation (MT) by dividing the input into short segments before performing translation. However, short segments pose problems for syntaxbased translation methods, as it is difficult to generate accurate parse trees for sub-sentential segments. In this paper, we perform the first experiments applying syntax-based SMT to simultaneous translation, and propose two methods to prevent degradations in accuracy: a method to predict unseen syntactic constituents that help generate complete parse trees, and a method that waits for more input when the current utterance is not enough to generate a fluent translation. Experiments on English-Japanese translation show that the proposed methods allow for improvements in accuracy, particularly with regards to word order of the target sentences. 1 Introduction Speech translation is an application of machine translation (MT) that converts utterances from the speaker’s language into the listener’s language. One of the most identifying features of speech translation is the fact that it must be performed in real time while the speaker is speaking, and thus it is necessary to split a constant stream of words into translatable segments before starting the translation process. Traditionally, speech translation assumes that each segment corresponds to a sentence, and thus performs sentence boundary detection before translation (Matusov et al., 2006). However, full sentences can be long, particularly in formal speech such as lectures, and if translation does not start until explicit ends of Figure 1: Simultaneous translation where the source sentence is segmented after “I think” and translated according to (a) the standard method, (b) Grissom II et al. (2014)’s method of final verb prediction, and (c) our method of predicting syntactic constituents. sentences, listeners may be forced to wait a considerable time until receiving the result of translation. For example, when the speaker continues to talk for 10 seconds, listeners must wait at least 10 seconds to obtain the result of translation. This is the major factor limiting simultaneity in traditional speech translation systems. Simultaneous translation (Section 2) avoids this problem by starting to translate before observing the whole sentence, as shown in Figure 1 (a). However, as translation starts before the whole sentence is observed, translation units are often not syntactically or semantically complete, and the performance may suffer accordingly. The deleterious effect of this missing information is less worrying in largely monotonic language pairs (e.g. English-French), but cannot be discounted in syntactically distant language pairs (e.g. EnglishJapanese) that often require long-distance reordering beyond translation units. 
One way to avoid this problem of missing information is to explicitly predict information needed 198 Figure 2: Process of English-Japanese simultaneous translation with sentence segmentation. to translate the content accurately. An ambitious first step in this direction was recently proposed by Grissom II et al. (2014), who describe a method that predicts sentence-final verbs using reinforcement learning (e.g. Figure 1 (b)). This approach has the potential to greatly decrease the delay in translation from verb-final languages to verbinitial languages (such as German-English), but is also limited to only this particular case. In this paper, we propose a more general method that focuses on a different variety of information: unseen syntactic constituents. This method is motivated by our desire to apply translation models that use source-side parsing, such as tree-to-string (T2S) translation (Huang et al., 2006) or syntactic pre-ordering (Xia and McCord, 2004), which have been shown to greatly improve translation accuracy over syntactically divergent language pairs. However, conventional methods for parsing are not directly applicable to the partial sentences that arise in simultaneous MT. The reason for this, as explained in detail in Section 3, is that parsing methods generally assume that they are given input that forms a complete syntactic phrase. Looking at the example in Figure 1, after the speaker has spoken the words “I think” we have a partial sentence that will only be complete once we observe the following SBAR. Our method attempts to predict exactly this information, as shown in Figure 1 (c), guessing the remaining syntactic constituents that will allow us to acquire a proper parse tree. Specifically the method consists of two parts: First, we propose a method that trains a statistical model to predict future syntactic constituents based on features of the input segment (Section 4). Second, we demonstrate how to apply this syntactic prediction to MT, including the proposal of a heuristic method that examines whether a future constituent has the potential to cause a reordering problem during translation, and wait for more input in these cases (Section 5). Based on the proposed method, we perform experiments in simultaneous translation of EnglishJapanese talks (Section 6). As this is the first work applying T2S translation to simultaneous MT, we first compare T2S to more traditional phrase-based techniques. We find that T2S translation is effective with longer segments, but drops off quickly with shorter segments, justifying the need for techniques to handle translation when full context is not available. We then compare the proposed method of predicting syntactic constituents, and find that it improves translation results, particularly with respect to word ordering in the output sentences. 2 Simultaneous Translation In simultaneous translation, we assume that we are given an incoming stream of words f, which we are expected to translate. As the f is long, we would like to begin translating before we reach the end of the stream. Previous methods to do so can generally be categorized into incremental decoding methods, and sentence segmentation methods. In incremental decoding, each incoming word is fed into the decoder one-by-one, and the decoder updates the search graph with the new words and decides whether it should begin translation. 
Incremental decoding methods have been proposed for phrase-based (Sankaran et al., 2010; Yarmohammadi et al., 2013; Finch et al., 2014) and hierarchical phrase-based (Siahbani et al., 2014) SMT 199 models.1 Incremental decoding has the advantage of using information about the decoding graph in the choice of translation timing, but also requires significant changes to the internal workings of the decoder, precluding the use of standard decoding tools or techniques. Sentence segmentation methods (Figure 2) provide a simpler alternative by first dividing f into subsequences of 1 or more words [f (1), . . . , f (N)]. These segments are then translated with a traditional decoder into output sequences [e(1), . . . , e(N)], which each are output as soon as translation finishes. Many methods have been proposed to perform segmentation, including the use of prosodic boundaries (F¨ugen et al., 2007; Bangalore et al., 2012), predicting punctuation marks (Rangarajan Sridhar et al., 2013), reordering probabilities of phrases (Fujita et al., 2013), or models to explicitly optimize translation accuracy (Oda et al., 2014). Previous work often assumes that f is a single sentence, and focus on sub-sentential segmentation, an approach we follow in this work. Sentence segmentation methods have the obvious advantage of allowing for translation as soon as a segment is decided. However, the use of the shorter segments also makes it necessary to translate while part of the utterance is still unknown. As a result, segmenting sentences more aggressively often results in a decrease translation accuracy. This is a problem in phrase-based MT, the framework used in the majority of previous research on simultaneous translation. However, it is an even larger problem when performing translation that relies on parsing the input sentence. We describe the problems caused by parsing a segment f (n), and solutions, in the following section. 3 Parsing Incomplete Sentences 3.1 Difficulties in Incomplete Parsing In standard phrase structure parsing, the parser assumes that each input string is a complete sentence, or at least a complete phrase. For example, Figure 3 (a) shows the phrase structure of the complete sentence “this is a pen.” However, in the case of simultaneous translation, each translation unit 1There is also one previous rule-based system that uses syntax in incremental translation, but it is language specific and limited domain (Ryu et al., 2006), and thus difficult to compare with our SMT-based system. It also does not predict unseen constituents, relying only on the observed segment. Figure 3: Phrase structures with surrounding syntactic constituents. is not necessarily segmented in a way that guarantees that the translation unit is a complete sentence, so each translation unit should be treated not as a whole, but as a part of a spoken sentence. As a result, the parser input may be an incomplete sequence of words (e.g. “this is,” “is a”), and a standard parser will generate an incorrect parse as shown in Figures 3(b) and 3(c). The proposed method solves this problem by supplementing unseen syntactic constituents before and after the translation unit. 
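The device that makes this supplementation possible, formalized in Section 3.2 below, is that a predicted constituent tag is treated as a pseudo-word that can only be generated by the identical nonterminal, with probability 1. The following sketch (an illustration under assumed data structures, not the interface of any actual parser) shows how the span-1 cells of a CKY chart could be initialized under this convention; `lexicon` maps (tag, word) pairs to log-probabilities.

```python
NEG_INF = float("-inf")

def init_terminal_cells(tokens, lexicon, constituent_tags):
    """Build span-1 chart cells for CKY parsing of L* ++ w ++ R* (sketch).

    Ordinary words are scored with log Pr(X -> w) from the lexicon, while a
    predicted constituent tag such as "NP" can only be covered by the
    identical nonterminal, with log-probability 0, i.e. probability 1
    (this corresponds to Eq. (6) in the next subsection).
    """
    cells = []
    for tok in tokens:
        if tok in constituent_tags:
            cells.append({tok: 0.0})
        else:
            cell = {}
            for (tag, word), logp in lexicon.items():
                if word == tok:
                    cell[tag] = max(cell.get(tag, NEG_INF), logp)
            cells.append(cell)
    return cells

# Example: for ["this", "is", "NP"] the third cell is {"NP": 0.0}, so a
# standard CKY pass over these cells can still complete an S node.
```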
For example, considering parse trees for the complete sentence in Figure 3(a), we see that a noun phrase (NP) can be placed after the translation unit “this is.” If we append the syntactic constituent NP as a “black box” before parsing, we can create a syntactically desirable parse tree as shown in Figure 3(d1) We also can construct another tree as shown in Figure 3(d2) by appending two constituents DT and NN . For the other example “is a,” we can create the parse tree in Figure 3(e1) by appending NP before the unit and NN after the unit, or can create the tree in Figure 3(e2) by appending only NN after the unit. 3.2 Formulation of Incomplete Parsing A typical model for phrase structure parsing is the probabilistic context-free grammar (PCFG). Parsing is performed by finding the parse tree T that 200 maximizes the PCFG probability given a sequence of words w ≡[w1, w2, · · · , wn] as shown by Eq. (2): T ∗ ≡ arg max T Pr(T|w) (1) ≃ arg max T [ ∑ (X→[Y,···])∈T log Pr(X →[Y, · · ·]) + ∑ (X→wi)∈T log Pr(X →wi) ], (2) where Pr(X →[Y, · · ·]) represents the generative probabilities of the sequence of constituents [Y, · · ·] given a parent constituent X, and Pr(X → wi) represents the generative probabilities of each word wi (1 ≤i ≤n) given a parent constituent X. To consider parsing of incomplete sentences with appended syntactic constituents, We define L ≡[L|L|, · · · , L2, L1] as the sequence of preceding syntactic constituents of the translation unit and R ≡[R1, R2, · · · , R|R|] as the sequence of following syntactic constituents of the translation unit. For the example Figure 3(d1), we assume that L = [ ] and R = [ NP ]. We assume that both sequences of syntactic constituents L and R are predicted based on the sequence of words w before the main parsing step. Thus, the whole process of parsing incomplete sentences can be described as the combination of predicting both sequences of syntactic constituents represented by Eq. (3) and (4) and parsing with predicted syntactic constituents represented by Eq. (5): L∗ ≡ arg max L Pr(L|w), (3) R∗ ≡ arg max R Pr(R|w), (4) T ∗ ≡ arg max T Pr(T|L∗, w, R∗). (5) Algorithmically, parsing with predicted syntactic constituents can be achieved by simply treating each syntactic constituent as another word in the input sequence and using a standard parsing algorithm such as the CKY algorithm. In this process, the only difference between syntactic constituents and normal words is the probability, which we define as follows: Pr(X →Y ) ≡ { 1, if Y = X 0, otherwise. (6) It should be noted that here L refers to syntactic constituents that have already been seen in the past. Thus, it is theoretically possible to store past parse trees as history and generate L based on this history, or condition Eq. 3 based on this information. However, deciding which part of trees to use as L is not trivial, and applying this approach requires that we predict L and R using different methods. Thus, in this study, we use the same method to predict both sequences of constituents for simplicity. In the next section, we describe the actual method used to create a predictive model for these strings of syntactic constituents. 4 Predicting Syntactic Constituents In order to define which syntactic constituents should be predicted by our model, we assume that each final parse tree generated by w, L and R must satisfy the following conditions: 1. 
The parse tree generated by w, L and R must be “complete.” Defining this formally, this means that the root node of the parse tree for the segment must correspond to a node in the parse tree for the original complete sentence. 2. Each parse tree contains only L, w and R as terminal symbols. 3. The number of nodes is the minimum necessary to satisfy these conditions. As shown in the Figure 3, there is ambiguity regarding syntactic constituents to be predicted (e.g. we can choose either [ NP ] or [ DT , NN ] as R for w = [ “this”, “is” ]). These conditions avoid ambiguity of which syntactic constituents should predicted for partial sentences in the training data. Looking at the example, Figures 3(d1) and 3(e1) satisfy these conditions, but 3(d2) and 3(e2) do not. Figure 4 shows the statistics of the lengths of L and R sequences extracted according to these criteria for all substrings of the WSJ datasets 2 to 23 of the Penn Treebank (Marcus et al., 1993), a standard training set for English syntactic parsers. From the figure we can see that lengths of up to 2 constituents cover the majority of cases for both L and R, but a significant number of cases require longer strings. Thus methods that predict a fixed number of constituents are not appropriate here. In Algorithm 1, we show the method we propose to 201 Figure 4: Statistics of numbers of syntactic constituents to be predicted. predict R for constituent sequences of an arbitrary length. Here ++ represents the concatenation of two sequences. First, our method forcibly parses the input sequence w and retrieves a potentially incorrect parse tree T ′, which is used to calculate features for the prediction model. The next syntactic constituent R+ is then predicted using features extracted from w, T ′, and the predicted sequence history R∗. This prediction is repeated recurrently until the end-of-sentence symbol (“nil” in Algorithm 1) is predicted as the next symbol. In this study, we use a multi-label classifier based on linear SVMs (Fan et al., 2008) to predict new syntactic constituents with features shown in Table 1. We treat the input sequence w and predicted syntactic constituents R∗as a concatenated sequence w ++ R∗. For example, if we have w = [ this, is, a ] and R∗= [ NN ], then the word features “3 rightmost 1-grams” will take the values “is,” “a,” and NN . Tags of semi-terminal nodes in T ′ are used as part-of-speech (POS) tags for corresponding words and the POS of each predicted syntactic constituent is simply its tag. “nil” is used when some information is not available. For example, if we have w = [ this, is ] and R∗= [ ] then “3 rightmost 1-grams” will take the values “nil,” “this,” and “is.” Algorithm 1 and Table 1 shows the method used to predict R∗but L∗ can be predicted by performing the prediction process in the reverse order. 5 Tree-to-string SMT with Syntactic Constituents Once we have created a tree from the sequence L∗++ w ++ R∗by performing PCFG parsing with predicted syntactic constituents according to Eqs. (2), (5), and (6), the next step is to use this tree in translation. In this section, we focus specifically Algorithm 1 Prediction algorithm for following constituents R∗ T ′ ←arg max T Pr(T|w) R∗←[ ] loop R+ ←arg max R Pr(R|T ′, R∗) if R+ = nil then return R∗ end if R∗←R∗++[R+] end loop Table 1: Features used in predicting syntactic constituents. 
Type Feature Words 3 leftmost 1,2-grams in w ++ R∗ 3 rightmost 1,2-grams in w ++ R∗ Left/rightmost pair in w ++ R∗ POS Same as “Words” Parse Tag of the root node Tags of children of the root node Pairs of root and children nodes Length |w| |R∗| on T2S translation, which we use in our experiments, but it is likely that similar methods are applicable to other uses of source-side syntax such as pre-ordering as well. It should be noted that using these trees in T2S translation models is not trivial because each estimated syntactic constituent should be treated as an aggregated entity representing all possibilities of subtrees rooted in such a constituent. Specifically, there are two problems: the possibility of reordering an as-of-yet unseen syntactic constituent into the middle of the translated sentence, and the calculation of language model probabilities considering syntactic constituent tags. With regards to the first problem of reordering, consider the example of English-Japanese translation in Figure 5(b), where a syntactic constituent PP is placed at the end of the English sequence (R∗), but the corresponding entity in the Japanese translation result should be placed in the middle of the sentence. In this case, if we attempt to translate immediately, we will have to omit the as-of-yet unknown PP from our translation and translate it later, resulting in an unnatural word ordering in the 202 (a) (b) Figure 5: Waiting for the next translation unit. target sentence.2 Thus, if any of the syntactic constituents in R are placed anywhere other than the end of the translation result, we can assume that this is a hint that the current segmentation boundary is not appropriate. Based on this intuition, we propose a heuristic method that ignores segmentation boundaries that result in a translation of this type, and instead wait for the next translation unit, helping to avoid problems due to inappropriate segmentation boundaries. Algorithm 2 formally describes this waiting method. The second problem of language model probabilities arises because we are attempting to generate a string of words, some of which are not actual words but tags representing syntactic constituents. Creating a language model that contains probabilities for these tags in the appropriate places is not trivial, so for simplicity, we simply assume that every syntactic constituent tag is an unknown word, and that the output of translation consists of both translated normal words and non-translated tags as shown in Figure 5. We relegate a more complete handling of these tags to future work. 2It is also potentially possible to create a predictive model for the actual content of the PP as done for sentence-final verbs by Grissom II et al. (2014), but the space of potential prepositional phrases is huge, and we leave this non-trivial task for future work. Algorithm 2 Waiting algorithm for T2S SMT w ←[ ] loop w ←w ++ NextSegment() L∗←arg max L Pr(L|w) R∗←arg max R Pr(R|w) T ∗←arg max T Pr(T|L∗, w, R∗) e∗←arg max e Pr(e|T ∗) if elements of R∗are rightmost in e∗then Output(e∗) w ←[ ] end if end loop 6 Experiments 6.1 Experiment Settings We perform 2 types of experiments to evaluate the effectiveness of the proposed methods. 6.1.1 Predicting Syntactic Constituents In the first experiment, we evaluate prediction accuracies of unseen syntactic constituents L and R. To do so, we train a predictive model as described in Section 4 using an English treebank and evaluate its performance. 
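Algorithms 1 and 2 translate almost directly into code. The sketch below is an illustrative rendering with stand-in callables: `feature_fn`, `classifier`, `parse_fn`, `translate`, `predict_L`, and `predict_R` are hypothetical interfaces, not those of the toolkits used in the experiments, and outputs are collected segment-wise rather than printed.

```python
def predict_following(w, parse_fn, feature_fn, classifier, max_len=10):
    """Algorithm 1 (sketch): predict the following constituents R*.

    `classifier(features)` returns the next constituent tag, or None for
    "nil".  The preceding constituents L* can be obtained by running the
    same procedure on the reversed input.
    """
    tree = parse_fn([], w, [])        # forced parse T' of w alone
    R = []
    while len(R) < max_len:           # guard against non-termination
        nxt = classifier(feature_fn(w, tree, R))
        if nxt is None:
            break
        R.append(nxt)
    return R


def translate_with_waiting(segments, predict_L, predict_R, parse_fn, translate):
    """Algorithm 2 (sketch): the waiting strategy for T2S translation.

    `translate(tree)` must return target tokens in which the predicted
    right-hand constituent tags appear untranslated; a result is emitted
    only when all of R*'s tags sit at the right edge of the output.
    """
    w, outputs = [], []
    for segment in segments:
        w = w + segment
        L, R = predict_L(w), predict_R(w)
        tree = parse_fn(L, w, R)
        e = translate(tree)
        tail = e[len(e) - len(R):]    # last |R*| output tokens
        if all(tag in tail for tag in R):
            # strip constituent tags before output, as in T2S-Tag
            outputs.append([t for t in e if t not in set(L) | set(R)])
            w = []                    # start a fresh translation unit
    return outputs
```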
To create training and testing data, we extract all substrings w s.t. |w| ≥2 in the Penn Treebank and calculate the corresponding syntactic constituents L and R by according to the original trees and substring w. We use the 90% of the extracted data for training a classifier and the remaining 10% for testing estimation recall, precision and F-measure. We use the Ckylark parser(Oda et al., 2015) to generate T ′ from w. 6.1.2 Simultaneous Translation Next, we evaluate the performance of T2S simultaneous translation adopting the two proposed methods. We use data of TED talks from the English-Japanese section of WIT3 (Cettolo et al., 2012), and also append dictionary entries and examples in Eijiro3 to the training data to increase the vocabulary of the translation model. The total number of sentences/entries is 2.49M (WIT3, Eijiro), 998 (WIT3), and 468 (WIT3) sentences for training, development, and testing respectively. We use the Stanford Tokenizer4 for English tokenization, KyTea (Neubig et al., 2011) for 3http://eijiro.jp/ 4http://nlp.stanford.edu/software/tokenizer.shtml 203 Japanese tokenization, GIZA++ (Och and Ney, 2003) to construct word alignment, and KenLM (Heafield et al., 2013) to generate a 5-gram target language model. We use the Ckylark parser, which we modified to implement the parsing method of Section 3.2, to generate T ∗from L∗, w and R∗. We use Travatar (Neubig, 2013) to train the T2S translation model used in the proposed method, and also Moses (Koehn et al., 2007) to train phrase-based translation models that serve as a baseline. Each translation model is tuned using MERT (Och, 2003) to maximize BLEU (Papineni et al., 2002). We evaluate translation accuracies by BLEU and also RIBES (Isozaki et al., 2010), a reordering-focused metric which has achieved high correlation with human evaluation on English-Japanese translation tasks. We perform tests using two different sentence segmentation methods. The first is n-words segmentation (Rangarajan Sridhar et al., 2013), a simple heuristic that simply segments the input every n words. This method disregards syntactic and semantic units in the original sentence, allowing us to evaluate the robustness of translation against poor segmentation boundaries. The second method is the state-of-the-art segmentation strategy proposed by Oda et al. (2014), which finds segmentation boundaries that optimize the accuracy of the translation output. We use BLEU+1 (Lin and Och, 2004) as the objective of this segmentation strategy. We evaluate the following baseline and proposed methods: PBMT is a baseline using phrase-based SMT. T2S uses T2S SMT with parse trees generated from only w. T2S-Tag further predicts unseen syntactic constituents according to Section 4. Before evaluation, all constituent tags are simply deleted from the output. T2S-Wait uses T2S-Tag and adds the waiting strategy described in Section 5. We also show PBMT-Sent and T2S-Sent which are full sentence-based PBMT and T2S systems. 6.2 Results 6.2.1 Predicting Syntactic Constituents Table 2 shows the recall, precision, and F-measure of the estimated L and R sequences. The table Table 2: Performance of syntactic constituent prediction. Target P % R % F % L (ordered) 31.93 7.27 11.85 (unordered) 51.21 11.66 19.00 R (ordered) 51.12 33.78 40.68 (unordered) 52.77 34.87 42.00 shows results of two evaluation settings, where the order of generated constituents is considered or not. 
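The two evaluation settings can be made concrete with the sketch below. The exact matching criterion is not spelled out in the text, so this is only one plausible reading: position-wise matches for the "ordered" setting and multiset overlap for the "unordered" one, with precision normalized by the number of predicted constituents and recall by the number of gold constituents; the helper names `evaluate` and `prf` are hypothetical.

```python
from collections import Counter

def prf(n_match, n_pred, n_gold):
    p = n_match / n_pred if n_pred else 0.0
    r = n_match / n_gold if n_gold else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def evaluate(pred_seqs, gold_seqs, ordered=True):
    """Corpus-level P/R/F over predicted constituent sequences."""
    n_match = n_pred = n_gold = 0
    for pred, gold in zip(pred_seqs, gold_seqs):
        if ordered:   # position-sensitive matching
            n_match += sum(a == b for a, b in zip(pred, gold))
        else:         # order-insensitive: multiset overlap
            n_match += sum((Counter(pred) & Counter(gold)).values())
        n_pred += len(pred)
        n_gold += len(gold)
    return prf(n_match, n_pred, n_gold)
```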
We can see that in each case recall is lower than the corresponding precision and the performance of L differs between ordered and unordered results. These trends result from the fact that the model generates fewer constituents than exist in the test data. However, this trend is not entirely unexpected because it is not possible to completely accurately guess syntactic constituents from every substring w. For example, parts of the sentence “in the next 18 minutes” can generate the sequence “in the next CD NN ” and “ IN DT JJ 18 minutes,” but the constituents CD in the former case and DT and JJ in the latter case are not necessary in all situations. In contrast, NN and IN will probably be inserted most cases. As a result, the appearance of such ambiguous constituents in the training data is less consistent than that of necessary syntactic constituents, and thus the prediction model avoids generating such ambiguous constituents. 6.2.2 Simultaneous Translation Next, we evaluate the translation results achieved by the proposed method. Figures 6 and 7 show the relationship between the mean number of words in the translation segments and translation accuracy of BLEU and RIBES respectively. Each horizontal axis of these graphs indicates the mean number of words in translation units that are used to generate the actual translation output, and these can be assumed to be proportional to the mean waiting time for listeners. In cases except T2S-Wait, these values are equal to the mean length of translation unit generated by the segmentation strategies, and in the case of T2S-Wait, this value shows the length of the translation units concatenated by the waiting strategy. First looking at the full sentence results (rightmost points in each graph), we can see that T2S greatly outperforms PBMT on full sentences, 204 (a) n-words segmentation (b) optimized segmentation Figure 6: Mean #words and BLEU scores of each method. (a) n-words segmentation (b) optimized segmentation Figure 7: Mean #words and RIBES scores of each method. underlining the importance of considering syntax for this language pair. Turning to simultaneous translation, we first consider the case of n-words segmentation, which will demonstrate robustness of each method to poorly formed translation segments. When we compare PBMT and T2S, we can see that T2S is superior for longer segments, but on shorter segments performance is greatly reduced, dropping below that of PBMT in BLEU at an average of 6 words, and RIBES at an average of 4 words. This trend is reasonable, considering that shorter translation units will result in syntactically inconsistent units and thus incorrect parse trees. Next looking at the results for T2S-Tag, we can see that in the case of the n-words segmentation, it is able to maintain the same translation performance of PBMT, even at the shorter settings. Furthermore, T2S-Wait also maintains the same performance of T2S-Tag in BLEU and achieves much higher performance than any of the other methods in RIBES, particularly with regards to shorter translation units. This result shows that the method of waiting for more input in the face of potential reordering problems is highly effective in maintaining the correct ordering of the output. In the case of the optimized segmentation, all three T2S methods maintain approximately the same performance, consistently outperforming PBMT in RIBES, and crossing in BLEU around 56 words. 
From this, we can hypothesize that the optimized segmentation strategy learns features that maintain some syntactic consistency, which plays a similar role to the proposed method. However, RIBES scores for T2S-Wait is still generally higher than the other methods, demonstrating that waiting maintains its reordering advantage even in the optimized segmentation case. 7 Conclusion and Future Work In this paper, we proposed the first method to apply SMT using source syntax to simultaneous translation. Especially, we proposed methods to maintain the syntactic consistency of translation units by predicting unseen syntactic constituents, and waiting until more input is available when it is necessary to achieve good translation results. Ex205 periments on an English-Japanese TED talk translation task demonstrate that our methods are more robust to short, inconsistent translation segments. As future work, we are planning to devise more sophisticated methods for language modeling using constituent tags, and ways to incorporate previously translated segments into the estimation process for left-hand constituents. Next, our method to predict additional constituents does not target the grammatically correct translation units for which L = [ ] and R = [ ], although there is still room for improvement in this assumption. In addition, we hope to expand the methods proposed here to a more incremental setting, where both parsing and decoding are performed incrementally, and the information from these processes can be reflected in the decision of segmentation boundaries. Acknowledgement Part of this work was supported by JSPS KAKENHI Grant Number 24240032, and Grant-in-Aid for JSPS Fellows Grant Number 15J10649. References Srinivas Bangalore, Vivek Kumar Rangarajan Sridhar, Prakash Kolan, Ladan Golipour, and Aura Jimenez. 2012. Real-time incremental speech-tospeech translation of dialogs. In Proc. NAACL. Mauro Cettolo, Christian Girardi, and Marcello Federico. 2012. WIT3: Web inventory of transcribed and translated talks. In Proc. EAMT. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, and Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. The Journal of Machine Learning Research. Andrew Finch, Xiaolin Wang, and Eiichiro Sumita. 2014. An exploration of segmentation strategies in stream decoding. In Proc. IWSLT. Christian F¨ugen, Alex Waibel, and Muntsin Kolss. 2007. Simultaneous translation of lectures and speeches. Machine Translation, 21. Tomoki Fujita, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2013. Simple, lexicalized choice of translation timing for simultaneous speech translation. In Proc. Interspeech. Alvin Grissom II, He He, Jordan Boyd-Graber, John Morgan, and Hal Daum´e III. 2014. Dont until the final verb wait: Reinforcement learning for simultaneous machine translation. In Proc. EMNLP. Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable modified Kneser-Ney language model estimation. In Proc. ACL. Liang Huang, Kevin Knight, and Aravind Joshi. 2006. Statistical syntax-directed translation with extended domain of locality. In Proc. AMTA. Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evaluation of translation quality for distant language pairs. In Proc. EMNLP. 
Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. ACL. Chin-Yew Lin and Franz Josef Och. 2004. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. In Proc. COLING. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The Penn treebank. Computational linguistics, 19(2). Evgeny Matusov, Arne Mauser, and Hermann Ney. 2006. Automatic sentence segmentation and punctuation prediction for spoken language translation. In Proc. IWSLT. Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable japanese morphological analysis. In Proc. ACLHLT. Graham Neubig. 2013. Travatar: A forest-to-string machine translation engine based on tree transducers. In Proc. ACL. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proc. ACL. Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2014. Optimizing segmentation strategies for simultaneous speech translation. In Proc. ACL. Yusuke Oda, Graham Neubig, Sakriani Sakti, Tomoki Toda, and Satoshi Nakamura. 2015. Ckylark: A more robust PCFG-LA parser. In Proc. NAACLHLT. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: A method for automatic evaluation of machine translation. In Proc. ACL. 206 Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, Andrej Ljolje, and Rathinavelu Chengalvarayan. 2013. Segmentation strategies for streaming speech translation. In Proc. NAACL-HLT. Koichiro Ryu, Shigeki Matsubara, and Yasuyoshi Inagaki. 2006. Simultaneous english-japanese spoken language translation based on incremental dependency parsing and transfer. In Proc. COLING. Baskaran Sankaran, Ajeet Grewal, and Anoop Sarkar. 2010. Incremental decoding for phrase-based statistical machine translation. In Proc. WMT. Maryam Siahbani, Ramtin Mehdizadeh Seraj, Baskaran Sankaran, and Anoop Sarkar. 2014. Incremental translation using hierarchical phrasebased translation system. In Proc. SLT. Fei Xia and Michael McCord. 2004. Improving a statistical MT system with automatically learned rewrite patterns. In Proc. COLING. Mahsa Yarmohammadi, Vivek Kumar Rangarajan Sridhar, Srinivas Bangalore, and Baskaran Sankaran. 2013. Incremental segmentation and decoding strategies for simultaneous translation. In Proc. IJCNLP. 207
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 208–218, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Efficient Top-Down BTG Parsing for Machine Translation Preordering Tetsuji Nakagawa Google Japan Inc. [email protected] Abstract We present an efficient incremental topdown parsing method for preordering based on Bracketing Transduction Grammar (BTG). The BTG-based preordering framework (Neubig et al., 2012) can be applied to any language using only parallel text, but has the problem of computational efficiency. Our top-down parsing algorithm allows us to use the early update technique easily for the latent variable structured Perceptron algorithm with beam search, and solves the problem. Experimental results showed that the topdown method is more than 10 times faster than a method using the CYK algorithm. A phrase-based machine translation system with the top-down method had statistically significantly higher BLEU scores for 7 language pairs without relying on supervised syntactic parsers, compared to baseline systems using existing preordering methods. 1 Introduction The difference of the word order between source and target languages is one of major problems in phrase-based statistical machine translation. In order to cope with the issue, many approaches have been studied. Distortion models consider word reordering in decoding time using such as distance (Koehn et al., 2003) and lexical information (Tillman, 2004). Another direction is to use more complex translation models such as hierarchical models (Chiang, 2007). However, these approaches suffer from the long-distance reordering issue and computational complexity. Preordering (reordering-as-preprocessing) (Xia and McCord, 2004; Collins et al., 2005) is another approach for tackling the problem, which modifies the word order of an input sentence in a source language to have the word order in a target language (Figure 1(a)). Various methods for preordering have been studied, and a method based on Bracketing Transduction Grammar (BTG) was proposed by Neubig et al. (2012). It reorders source sentences by handling sentence structures as latent variables. The method can be applied to any language using only parallel text. However, the method has the problem of computational efficiency. In this paper, we propose an efficient incremental top-down BTG parsing method which can be applied to preordering. Model parameters can be learned using latent variable Perceptron with the early update technique (Collins and Roark, 2004), since the parsing method provides an easy way for checking the reachability of each parser state to valid final states. We also try to use forced-decoding instead of word alignment based on Expectation Maximization (EM) algorithms in order to create better training data for preordering. In experiments, preordering using the topdown parsing algorithm was faster and gave higher BLEU scores than BTG-based preordering using the CYK algorithm. Compared to existing preordering methods, our method had better or comparable BLEU scores without using supervised parsers. 
2 Previous Work 2.1 Preordering for Machine Translation Many preordering methods which use syntactic parse trees have been proposed, because syntactic information is useful for determining the word order in a target language, and it can be used to restrict the search space against all the possible permutations. Preordering methods using manually created rules on parse trees have been studied (Collins et al., 2005; Xu et al., 2009), but 208 Figure 1: An example of preordering. linguistic knowledge for a language pair is necessary to create such rules. Preordering methods which automatically create reordering rules or utilize statistical classifiers have also been studied (Xia and McCord, 2004; Li et al., 2007; Genzel, 2010; Visweswariah et al., 2010; Yang et al., 2012; Miceli Barone and Attardi, 2013; Lerner and Petrov, 2013; Jehl et al., 2014). These methods rely on source-side parse trees and cannot be applied to languages where no syntactic parsers are available. There are preordering methods that do not need parse trees. They are usually trained only on automatically word-aligned parallel text. It is possible to mine parallel text from the Web (Uszkoreit et al., 2010; Antonova and Misyurev, 2011), and the preordering systems can be trained without manually annotated language resources. Tromble and Eisner (2009) studied preordering based on a Linear Ordering Problem by defining a pairwise preference matrix. Khalilov and Sima’an (2010) proposed a method which swaps adjacent two words using a maximum entropy model. Visweswariah et al. (2011) regarded the preordering problem as a Traveling Salesman Problem (TSP) and applied TSP solvers for obtaining reordered words. These methods do not consider sentence structures. DeNero and Uszkoreit (2011) presented a preordering method which builds a monolingual parsing model and a tree reordering model from parallel text. Neubig et al. (2012) proposed to train a discriminative BTG parser for preordering directly from word-aligned parallel text by handling underlying parse trees with latent variables. This method is explained in detail in the next subsection. These two methods can use sentence structures for designing feature functions to score permutations. Figure 2: Bracketing transduction grammar. 2.2 BTG-based Preordering Neubig et al. (2012) proposed a BTG-based preordering method. Bracketing Transduction Grammar (BTG) (Wu, 1997) is a binary synchronous context-free grammar with only one non-terminal symbol, and has three types of rules (Figure 2): Straight which keeps the order of child nodes, Inverted which reverses the order, and Terminal which generates a terminal symbol.1 BTG can express word reordering. For example, the word reordering in Figure 1(a) can be represented with the BTG parse tree in Figure 1(b).2 Therefore, the task to reorder an input source sentence can be solved as a BTG parsing task to find an appropriate BTG tree. In order to find the best BTG tree among all the possible ones, a score function is defined. Let Φ(m) denote the vector of feature functions for the BTG tree node m, and Λ denote the vector of feature weights. Then, for a given source sentence x, the best BTG tree ˆz and the reordered sentence x′ can be obtained as follows: ˆz = argmax z∈Z(x) ∑ m∈Nodes(z) Λ · Φ(m), (1) x′ = Proj(ˆz), (2) where Z(x) is the set of all the possible BTG trees for x, Nodes(z) is the set of all the nodes in the tree z, and Proj(z) is the function which generates a reordered sentence from the BTG tree z. 
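To make Proj(z) concrete, the sketch below applies a BTG tree to a source sentence. The tuple-based tree encoding and the function name `proj` are illustrative assumptions; only the Straight/Inverted semantics (keep vs. reverse the order of the two children) follow the grammar of Figure 2, and the scoring with Λ · Φ(m) is omitted.

```python
# Sketch of Proj(z): produce the reordered sentence from a BTG tree whose
# internal nodes are Straight ("S") or Inverted ("I").

def proj(node):
    """node is a terminal word (str) or a tuple (node_type, left, right)."""
    if isinstance(node, str):
        return [node]
    node_type, left, right = node
    if node_type == "S":             # Straight: keep the order of the children
        return proj(left) + proj(right)
    return proj(right) + proj(left)  # Inverted: swap the order of the children

# "I ate an apple" reordered into a Japanese-like SOV order:
tree = ("S", "I", ("I", "ate", ("S", "an", "apple")))
print(proj(tree))                    # ['I', 'an', 'apple', 'ate']
```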
The method was shown to improve translation performance. However, it has a problem of processing speed. The CYK algorithm, whose computational complexity is O(n3) for a sen1Although Terminal produces a pair of source and target words in the original BTG (Wu, 1997), the target-side words are ignored here because both the input and the output of preordering systems are in the source language. In (Wu, 1997), (DeNero and Uszkoreit, 2011) and (Neubig et al., 2012), Terminal can produce multiple words. Here, we produce only one word. 2There may be more than one BTG tree which represents the same word reordering (e.g., the word reordering C3B2A1 to A1B2C3 has two possible BTG trees), and there are permutations which cannot be represented with BTG (e.g., B2D4A1C3 to A1B2C3D4, which is called the 2413 pattern). 209 Figure 3: Top-down BTG parsing. (0) ⟨[[0, 5)], [], 0⟩ (1) ⟨[[0, 2), [2, 5)], [(2, S)], v1⟩ (2) ⟨[[0, 2), [3, 5)], [(2, S), (3, I)], v2⟩ (3) ⟨[[0, 2)], [(2, S), (3, I), (4, I)], v3⟩ (4) ⟨[], [(2, S), (3, I), (4, I), (1, S)], v4⟩ Table 1: Parser states in top-down parsing. tence of length n, is used to find the best parse tree. Furthermore, due to the use of a complex loss function, the complexity at training time is O(n5) (Neubig et al., 2012). Since the computational cost is prohibitive, some techniques like cube pruning and cube growing have been applied (Neubig et al., 2012; Na and Lee, 2013). In this study, we propose a top-down parsing algorithm in order to achieve fast BTG-based preordering. 3 Preordering with Incremental Top-Down BTG Parsing 3.1 Parsing Algorithm We explain an incremental top-down BTG parsing algorithm using Figure 3, which illustrates how a parse tree is built for the example sentence in Figure 1. At the beginning, a tree (span) which covers all the words in the sentence is considered. Then, a span which covers more than one word is split in each step, and the node type (Straight or Inverted) for the splitting point is determined. The algorithm terminates after (n −1) iterations for a sentence with n words, because there are (n −1) positions which can be split. We consider that the incremental parser has a parser state in each step, and define the state as a triple ⟨P, C, v⟩. P is a stack of unresolved spans. A span denoted by [p, q) covers the words xp · · · xq−1 for an input word sequence x = x0 · · · x|x|−1. C is a list of past parser actions. A parser action denoted by (r, o) represents the action to split a span at the position between xr−1 and xr with the node type o ∈{S, I}, where S and I indicate Straight and Inverted respectively. v is the score of the state, which is the sum of the Input: Sentence x, feature weights Λ, beam width k. Output: BTG parse tree. 1: S0 ←{⟨[[0, |x|)], [], 0⟩} // Initial state. 2: for i := 1, · · · , |x| −1 do 3: S ←{} // Set of the next states. 4: foreach s ∈Si−1 do 5: S ←S ∪τx,Λ(s) // Generate next states. 6: Si ←Topk(S) // Select k-best states. 7: ˆs = argmaxs∈S|x|−1 Score(s) 8: return Tree(ˆs) 9: function τx,Λ(⟨P, C, v⟩) 10: [p, q) ←P.pop() 11: S ←{} 12: for r := p + 1, · · · , q do 13: P ′ ←P 14: if r −p > 1 then 15: P ′.push([p, r)) 16: if q −r > 1 then 17: P ′.push([r, q)) 18: vS ←v + Λ · Φ(x, C, p, q, r, S) 19: vI ←v + Λ · Φ(x, C, p, q, r, I) 20: CS ←C; CS.append((r, S)) 21: CI ←C; CI.append((r, I)) 22: S ←S ∪{⟨P ′, CS, vS⟩, ⟨P ′, CI, vI⟩} 23: return S Figure 4: Top-down BTG parsing with beam search. scores for the nodes constructed so far. 
Parsing starts with the initial state ⟨[[0, |x|)], [], 0⟩, because there is one span covering all the words at the beginning. In each step, a span is popped from the top of the stack, and a splitting point in the span and its node type are determined. The new spans generated by the split are pushed onto the stack if their lengths are greater than 1, and the action is added to the list. On termination, the parser has the final state ⟨[], [c0, · · · , c|x|−2], v⟩, because the stack is empty and there are (|x| −1) actions in total. The parse tree can be obtained from the list of actions. Table 1 shows the parser state for each step in Figure 3. The top-down parsing method can be used with beam search as shown in Figure 4. τx,Λ(s) is a function which returns the set of all the possible next states for the state s. Topk(S) returns the top k states from S in terms of their scores, Score(s) returns the score of the state s, and Tree(s) returns the BTG parse tree constructed from s. Φ(x, C, p, q, r, o) is the feature vector for the node created by splitting the span [p, q) at r with the node type o, and is explained in Section 3.3. 3.2 Learning Algorithm Model parameters Λ are estimated from training examples. We assume that each training example 210 consists of a sentence x and its word order in a target language y = y0 · · · y|x|−1, where yi is the position of xi in the target language. For example, the example sentence in Figure 1(a) will have y = 0, 1, 4, 3, 2. y can have ambiguities. Multiple words can be reordered to the same position on the target side. The words whose target positions are unknown are indicated by position −1, and we consider such words can appear at any position.3 For example, the word alignment in Figure 5 gives the target side word positions y = −1, 2, 1, 0, 0. Statistical syntactic parsers are usually trained on tree-annotated corpora. However, corpora annotated with BTG parse trees are unavailable, and only the gold standard permutation y is available. Neubig et al. (2012) proposed to train BTG parsers for preordering by regarding BTG trees behind word reordering as latent variables, and we use latent variable Perceptron (Sun et al., 2009) together with beam search. In latent variable Perceptron, among the examples whose latent variables are compatible with a gold standard label, the one with the highest score is picked up as a positive example. Such an approach was used for parsing with multiple correct actions (Goldberg and Elhadad, 2010; Sartorio et al., 2013). Figure 6 describes the training algorithm.4 Φ(x, s) is the feature vector for all the nodes in the partial parse tree at the state s, and τx,Λ,y(s) is the set of all the next states for the state s. The algorithm adopts the early update technique (Collins and Roark, 2004) which terminates incremental parsing if a correct state falls off the beam, and there is no possibility to obtain a correct output. Huang et al. (2012) proposed the violationfixing Perceptron framework which is guaranteed to converge even if inexact search is used, and also showed that early update is a special case of the framework. We define that a parser state is valid if the state can reach a final state whose BTG parse tree is compatible with y. Since this is a latent variable setting in which multiple states can reach correct final states, early update occurs when all the valid states fall off the beam (Ma et al., 2013; Yu et al., 2013). 
In order to use early update, we need to check the validity of each parser 3In (Neubig et al., 2012), the positions of such words were fixed by heuristics. In this study, the positions are not fixed, and all the possibilities are considered by latent variables. 4Although the simple Perceptron algorithm is used for explanation, we actually used the Passive Aggressive algorithm (Crammer et al., 2006) with the parameter averaging technique (Freund and Schapire, 1999). state. We extend the parser state to the four tuple ⟨P, A, v, w⟩, where w ∈{true, false} is the validity of the state. We remove training examples which cannot be represented with BTG beforehand and set w of the initial state to true. The function V alid(s) in Figure 6 returns the validity of state s. One advantage of the top-down parsing algorithm is that it is easy to track the validity of each state. The validity of a state can be calculated using the following property, and we can implement the function τx,Λ,y(s) by modifying the function τx,Λ(s) in Figure 4. Lemma 1. When a valid state s, which has [p, q) in the top of the stack, transitions to a state s′ by the action (r, o), s′ is also valid if and only if the following condition holds: ∀i ∈{p, · · · , r −1} yi = −1 ∨ ∀i ∈{r, · · · , q −1} yi = −1 ∨ ( o = S ∧ max i=p,··· ,r−1 yi̸=−1 yi ≤ min i=r,··· ,q−1 yi̸=−1 yi ) ∨ ( o = I ∧ max i=r,··· ,q−1 yi̸=−1 yi ≤ min i=p,··· ,r−1 yi̸=−1 yi ) . (3) Proof. Let πi denote the position of xi after reordering by BTG parsing. If Condition (3) does not hold, there are i and j which satisfy πi < πj ∧yi > yj ∧yi ̸= −1 ∧yj ̸= −1, and πi and πj are not compatible with y. Therefore, s′ is valid only if Condition (3) holds. When Condition (3) holds, a valid permutation can be obtained if the spans [p, r) and [r, q) are BTG-parsable. They are BTG-parsable as shown below. Let us assume that y does not have ambiguities. The class of the permutations which can be represented by BTG is known as separable permutations in combinatorics. It can be proven (Bose et al., 1998) that a permutation is a separable permutation if and only if it contains neither the 2413 nor the 3142 patterns. Since s is valid, y is a separable permutation. y does not contain the 2413 nor the 3142 patterns, and any subsequence of y also does not contain the patterns. Thus, [p, r) and [r, q) are separable permutations. The above argument holds even if y has ambiguities (duplicated positions or unaligned words). In such a case, we can always make a word order y′ which specializes y and has no ambiguities (e.g., y′ = 2, 1.0, 0.0, 0.1, 1.1 for y = −1, 1, 0, 0, 1), because s is valid, and there is at least one BTG parse tree which licenses y. Any subsequence in 211 Figure 5: An example of word reordering with ambiguities. y′ is a separable permutation, and [p, r) and [r, q) are separable permutations. Therefore, s′ is valid if Condition (3) holds. For dependency parsing and constituent parsing, incremental bottom-up parsing methods have been studied (Yamada and Matsumoto, 2003; Nivre, 2004; Goldberg and Elhadad, 2010; Sagae and Lavie, 2005). Our top-down approach is contrastive to the bottom-up approaches. In the bottom-up approaches, spans which cover individual words are considered at the beginning, then they are merged into larger spans in each step, and a span which covers all the words is obtained at the end. 
In the top-down approach, a span which covers all the words is considered at the beginning, then spans are split into smaller spans in each step, and spans which cover individual words are obtained at the end. The top-down BTG parsing method has the advantage that the validity of parser states can be easily tracked. The computational complexity of the top-down parsing algorithm is O(kn2) for sentence length n and beam width k, because in Line 5 of Figure 4, which is repeated at most k(n −1) times, at most 2(n −1) parser states are generated, and their scores are calculated. The learning algorithm uses the same decoding algorithm as in the parsing phase, and has the same time complexity. Note that the validity of a parser state can be calculated in O(1) by pre-calculating mini=p,··· ,r∧yi̸=−1 yi, maxi=p,··· ,r∧yi̸=−1 yi, mini=r,··· ,q−1∧yi̸=−1 yi, and maxi=r,··· ,q−1∧yi̸=−1 yi for all r for the span [p, q) when it is popped from the stack. 3.3 Features We assume that each word xi in a sentence has three attributes: word surface form xw i , part-ofspeech (POS) tag xp i and word class xc i (Section 4.1 explains how xp i and xc i are obtained). Table 2 lists the features generated for the node which is created by splitting the span [p, q) with the action (r, o). o’ is the node type of the parent node, d ∈{left, right} indicates whether this node is the left-hand-side or the right-hand-side child of the parent node, and Balance(p, q, r) reInput: Training data {⟨xl, yl⟩}L−1 l=0 , number of iterations T, beam width k. Output: Feature weights Λ. 1: Λ ←0 2: for t := 0, · · · , T −1 do 3: for l := 0, · · · , L −1 do 4: S0 ←{⟨[[0, |xl|)], [], 0, true⟩} 5: for i := 1, · · · , |xl| −1 do 6: S ←{} 7: foreach s ∈Si−1 do 8: S ←S ∪τxl,Λ,yl(s) 9: Si ←Topk(S) 10: ˆs ←argmaxs∈S Score(s) 11: s∗←argmaxs∈S∧V alid(s) Score(s) 12: if s∗/∈Si then 13: break // Early update. 14: if ˆs ̸= s∗then 15: Λ ←Λ + Φ(xl, s∗) −Φ(xl, ˆs) 16: return Λ Figure 6: A training algorithm for latent variable Perceptron with beam search. turns a value among {‘<’, ‘=’, ‘>’} according to the relation of the lengths of [p, r) and [r, q). The baseline feature templates are those used by Neubig et al. (2012), and the additional feature templates are extended features that we introduce in this study. The top-down parser is fast, and allows us to use a larger number of features. In order to make the feature generation efficient, the attributes of all the words are converted to their 64-bit hash values beforehand, and concatenating the attributes is executed not as string manipulation but as faster integer calculation to generate a hash value by merging two hash values. The hash values are used as feature names. Therefore, when accessing feature weights stored in a hash table using the feature names as keys, the keys can be used as their hash values. This technique is different from the hashing trick (Ganchev and Dredze, 2008) which directly uses hash values as indices, and no noticeable differences in accuracy were observed by using this technique. 3.4 Training Data for Preordering As described in Section 3.2, each training example has y which represents correct word positions after reordering. However, only word alignment data is generally available, and we need to convert it to y. Let Ai denote the set of indices of the targetside words which are aligned to the source-side word xi. We define an order relation between two words: xi ≤xj ⇔ ∀a ∈Ai \ Aj, ∀b ∈Aj a ≤b ∧ ∀a ∈Ai, ∀b ∈Aj \ Ai a ≤b. 
(4) 212 Baseline Feature Template o(q −p), oBalance(p, q, r), oxw p−1, oxw p , oxw r−1, oxw r , oxw q−1, oxw q , oxw p xw q−1, oxw r−1xw r , oxp p−1, oxp p, oxp r−1, oxp r, oxp q−1, oxp q, oxp pxp q−1, oxp r−1xp r, oxc p−1, oxc p, oxc r−1, oxc r, oxc q−1, oxc q, oxc pxc q−1, oxc r−1xc r. Additional Feature Template o min(r −p, 5) min(q −r, 5), oo′, oo′d, oxw p−1xw p , oxw p xw r−1, oxw p xw r , oxw r−1xw q−1, oxw r xw q−1, oxw q−1xw q , oxw r−2xw r−1xw r , oxw p xw r−1xw r , oxw r−1xw r xw q−1, oxw r−1xw r xw r+1, oxw p xw r−1xw r xw q−1, oo′dxw p , oo′dxw r−1, oo′dxw r , oo′dxw q−1, oo′dxw p xw q−1, oxp p−1xp p, oxp pxp r−1, oxp pxp r, oxp r−1xp q−1, oxp rxp q−1, oxp q−1xp q, oxp r−2xp r−1xp r, oxp pxp r−1xp r, oxp r−1xp rxp q−1, oxp r−1xp rxp r+1, oxp pxp r−1xp rxp q−1, oo′dxp p, oo′dxp r−1, oo′dxp r, oo′dxp q−1, oo′dxp pxp q−1, oxc p−1xc p, oxc pxc r−1, oxc pxc r, oxc r−1xc q−1, oxc rxc q−1, oxc q−1xc q, oxc r−2xc r−1xc r, oxc pxc r−1xc r, oxc r−1xc rxc q−1, oxc r−1xc rxc r+1, oxc pxc r−1xc rxc q−1, oo′dxc p, oo′dxc r−1, oo′dxc r, oo′dxc q−1, oo′dxc pxc q−1. Table 2: Feature templates. Then, we sort x using the order relation and assign the position of xi in the sorted result to yi. If there are two words xi and xj in x which satisfy neither xi ≤xj nor xj ≤xi (that is, x does not make a totally ordered set with the order relation), then x cannot be sorted, and the example is removed from the training data. −1 is assigned to the words which do not have aligned target words. Two words xi and xj are regarded to have the same position if xi ≤xj and xj ≤xi. The quality of training data is important to make accurate preordering systems, but automatically word-aligned data by EM algorithms tend to have many wrong alignments. We use forceddecoding in order to make training data for preordering. Given a parallel sentence pair and a phrase table, forced-decoding tries to translate the source sentence to the target sentence, and produces phrase alignments. We train the parameters for forced-decoding using the same parallel data used for training the final translation system. Infrequent phrase translations are pruned when the phrase table is created, and forced-decoding does not always succeed for the parallel sentences in the training data. Forced-decoding tends to succeed for shorter sentences, and the phrase-alignment data obtained by forced-decoding is biased to contain more shorter sentences. Therefore, we apply the following processing for the output of forceddecoding to make training data for preordering: 1. Remove sentences which contain less than 3 or more than 50 words. 2. Remove sentences which contain less than 3 phrase alignments. 3. Remove sentences if they contain word 5grams which appear in other sentences in order to drop boilerplates. 4. Lastly, randomly resample sentences from the pool of filtered sentences to make the distribution of the sentence lengths follow a normal distribution with the mean of 20 and the standard deviation of 8. The parameters were determined from randomly sampled sentences from the Web. 4 Experiments 4.1 Experimental Settings We conduct experiments for 12 language pairs: Dutch (nl)-English (en), en-nl, en-French (fr), enJapanese (ja), en-Spanish (es), fr-en, Hindi (hi)-en, ja-en, Korean (ko)-en, Turkish (tr)-en, Urdu (ur)en and Welsh (cy)-en. We use a phrase-based statistical machine translation system which is similar to (Och and Ney, 2004). 
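Before going further, the alignment-to-positions conversion of Section 3.4 can be sketched as follows. The helper names `leq` and `alignments_to_y` are hypothetical, the 5-word alignment at the bottom is made up for illustration, and the tie handling (tied words share a dense rank starting from 0) is an assumption that is consistent with the y = −1, 2, 1, 0, 0 example of Figure 5.

```python
from functools import cmp_to_key

def leq(A_i, A_j):
    """The order relation x_i <= x_j of Eq. (4), over alignment index sets."""
    return (all(a <= b for a in A_i - A_j for b in A_j) and
            all(a <= b for a in A_i for b in A_j - A_i))

def alignments_to_y(A):
    """A[i] is the set of target indices aligned to source word x_i."""
    aligned = [i for i, a in enumerate(A) if a]
    # Discard examples that are not totally ordered under Eq. (4).
    for i in aligned:
        for j in aligned:
            if not (leq(A[i], A[j]) or leq(A[j], A[i])):
                return None
    def cmp(i, j):
        if leq(A[i], A[j]) and leq(A[j], A[i]):
            return 0                        # same target-side position
        return -1 if leq(A[i], A[j]) else 1
    order = sorted(aligned, key=cmp_to_key(cmp))
    y, rank = [-1] * len(A), 0              # unaligned words keep -1
    for k, i in enumerate(order):
        if k > 0 and cmp(order[k - 1], i) != 0:
            rank += 1
        y[i] = rank
    return y

# Made-up example: a 5-word source aligned to the target "he ate an apple".
A = [{0}, set(), {3}, set(), {1}]
print(alignments_to_y(A))                   # [0, -1, 2, -1, 1]
```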
The decoder adopts the regular distance distortion model, and also incorporates a maximum entropy based lexicalized phrase reordering model (Zens and Ney, 2006). The distortion limit is set to 5 words. Word alignments are learned using 3 iterations of IBM Model-1 (Brown et al., 1993) and 3 iterations of the HMM alignment model (Vogel et al., 1996). Lattice-based minimum error rate training (MERT) (Macherey et al., 2008) is applied to optimize feature weights. 5gram language models trained on sentences collected from various sources are used. The translation system is trained with parallel sentences automatically collected from the Web. The parallel data for each language pair consists of around 400 million source and target words. In order to make the development data for MERT and test data (3,000 and 5,000 sentences respectively for each language), we created parallel sentences by randomly collecting English sentences from the Web, and translating them by humans into each language. As an evaluation metric for translation quality, BLEU (Papineni et al., 2002) is used. As intrinsic evaluation metrics for preordering, Fuzzy Reordering Score (FRS) (Talbot et al., 2011) and Kendall’s τ (Kendall, 1938; Birch et al., 2010; Isozaki et al., 2010) are used. Let ρi denote the position in the input sentence of the (i+1)-th token in a preordered word sequence excluding unaligned words in the gold-standard evaluation data. For 213 en-ja ja-en Training Preordering FRS τ Training Preordering FRS τ (min.) (sent./sec.) (min.) (sent./sec.) Top-Down (EM-100k) 63 87.8 77.83 87.78 81 178.4 74.60 83.78 Top-Down (Basic Feat.) (EM-100k) 9 475.1 75.25 87.26 9 939.0 73.56 83.66 Lader (EM-100k) 1562 4.3 75.41 86.85 2087 12.3 74.89 82.15 Table 3: Speed and accuracy of preordering. en-ja ja-en FRS τ BLEU FRS τ BLEU Top-Down (Manual-8k) 81.57 90.44 18.13 79.26 86.47 14.26 (EM-10k) 74.79 85.87 17.07 72.51 82.65 14.55 (EM-100k) 77.83 87.78 17.66 74.60 83.78 14.84 (Forced-10k) 76.10 87.45 16.98 75.36 83.96 14.78 (Forced-100k) 78.76 89.22 17.88 76.58 85.25 15.54 Lader (EM-100k) 75.41 86.85 17.40 74.89 82.15 14.59 No-Preordering 46.17 65.07 13.80 59.35 65.30 10.31 Manual-Rules 80.59 90.30 18.68 73.65 81.72 14.02 Auto-Rules 64.13 84.17 16.80 60.60 75.49 12.59 Classifier 80.89 90.61 18.53 74.24 82.83 13.90 Table 4: Performance of preordering for various training data. Bold BLEU scores indicate no statistically significant difference at p < 0.05 from the best system (Koehn, 2004). example, the preordering result “New York I to went” for the gold-standard data in Figure 5 has ρ = 3, 4, 2, 1. Then FRS and τ are calculated as follows: FRS = B |ρ| + 1, (5) B = |ρ|−2 ∑ i=0 δ(yρi=yρi+1 ∨yρi+1=yρi+1) + δ(yρ0=0) + δ(yρ|ρ|−1= max i yi), (6) τ = ∑|ρ|−2 i=0 ∑|ρ|−1 j=i+1 δ(yρi ≤yρj) 1 2|ρ|(|ρ| −1) , (7) where δ(X) is the Kronecker’s delta function which returns 1 if X is true or 0 otherwise. These scores are calculated for each sentence, and are averaged over all sentences in test data. As above, FRS can be calculated as the precision of word bigrams (B is the number of the word bigrams which exist both in the system output and the gold standard data). This formulation is equivalent to the original formulation based on chunk fragmentation by Talbot et al. (2011). Equation (6) takes into account the positions of the beginning and the ending words (Neubig et al., 2012). Kendall’s τ is equivalent to the (normalized) crossing alignment link score used by Genzel (2010). 
We prepared three types of training data for learning model parameters of BTG-based preordering: Manual-8k Manually word-aligned 8,000 sentence pairs. EM-10k, EM-100k These are the data obtained with the EM-based word alignment learning. From the word alignment result for phrase translation extraction described above, 10,000 and 100,000 sentence pairs were randomly sampled. Before the sampling, the data filtering procedure 1 and 3 in Section 3.4 were applied, and also sentences were removed if more than half of source words do not have aligned target words. Word alignment was obtained by symmetrizing source-to-target and target-tosource word alignment with the INTERSECTION heuristic.5 Forced-10k, Forced-100k These are 10,000 and 100,000 word-aligned sentence pairs obtained with forced-decoding as described in Section 3.4. As test data for intrinsic evaluation of preordering, we manually word-aligned 2,000 sentence pairs for en-ja and ja-en. Several preordering systems were prepared in order to compare the following six systems: No-Preordering This is a system without preordering. Manual-Rules This system uses the preordering method based on manually created rules (Xu 5In our preliminary experiments, the UNION and GROWDIAG-FINAL heuristics were also applied to generate the training data for preordering, but INTERSECTION performed the best. 214 NoManualAutoClassifier Lader Top-Down Top-Down Preordering Rules Rules (EM-100k) (EM-100k) (Forced-100k) nl-en 34.01 34.24 35.42 33.83 35.49 35.51 en-nl 25.33 25.59 25.99 25.30 25.82 25.66 en-fr 25.86 26.39 26.35 26.50 26.75 26.81 en-ja 13.80 18.68 16.80 18.53 17.40 17.66 17.88 en-es 29.50 29.63 30.09 29.70 30.26 30.24 fr-en 32.33 32.09 32.28 32.43 33.00 32.99 hi-en 19.86 24.24 24.98 24.97 ja-en 10.31 14.02 12.59 13.90 14.59 14.84 15.54 ko-en 14.13 15.86 19.46 18.65 19.67 19.88 tr-en 18.26 22.80 23.91 24.18 ur-en 14.48 16.62 17.65 18.32 cy-en 41.68 41.79 41.95 41.86 Table 5: BLEU score comparison. Distortion NoManualAutoClassifier Lader Top-Down Top-Down Limit Preordering Rules Rules (EM-100k) (EM-100k) (Forced-100k) en-ja 5 13.80 18.68 16.80 18.53 17.40 17.66 17.88 en-ja 0 11.99 18.34 16.87 18.31 16.95 17.36 17.88 ja-en 5 10.31 14.02 12.59 13.90 14.59 14.84 15.54 ja-en 0 10.03 12.43 11.33 13.09 14.38 14.72 15.34 Table 6: BLEU scores for different distortion limits. et al., 2009). We made 43 precedence rules for en-ja, and 24 for ja-en. Auto-Rules This system uses the rule-based preordering method which automatically learns the rules from word-aligned data using the Variant 1 learning algorithm described in (Genzel, 2010). 27 to 36 rules were automatically learned for each language pair. Classifier This system uses the preordering method based on statistical classifiers (Lerner and Petrov, 2013), and the 2-step algorithm was implemented. Lader This system uses Latent Derivation Reorderer (Neubig et al., 2012), which is a BTG-based preordering system using the CYK algorithm.6 The basic feature templates in Table 2 are used as features. Top-Down This system uses the preordering system described in Section 3. Among the six systems, Manual-Rules, AutoRules and Classifier need dependency parsers for source languages. A dependency parser based on the shift-reduce algorithm with beam search (Zhang and Nivre, 2011) is used. The dependency parser and all the preordering systems need POS taggers. A supervised POS tagger based on conditional random fields (Lafferty et al., 2001) trained with manually POS annotated data is used for nl, en, fr, ja and ko. 
For other languages, we use a POS tagger based on POS projection (T¨ackstr¨om 6lader 0.1.4. http://www.phontron.com/lader/ et al., 2013) which does not need POS annotated data. Word classes in Table 2 are obtained by using Brown clusters (Koo et al., 2008) (the number of classes is set to 256). For both Lader and TopDown, the beam width is set to 20, and the number of training iterations of online learning is set to 20. The CPU time shown in this paper is measured using Intel Xeon 3.20GHz with 32GB RAM. 4.2 Results 4.2.1 Training and Preordering Speed Table 3 shows the training time and preordering speed together with the intrinsic evaluation metrics. In this experiment, both Top-Down and Lader were trained using the EM-100k data. Compared to Lader, Top-Down was faster: more than 20 times in training, and more than 10 times in preordering. Top-down had higher preordering accuracy in FRS and τ for en-ja. Although Lader uses sophisticated loss functions, Top-Down uses a larger number of features. Top-Down (Basic feats.) is the top-down method using only the basic feature templates in Table 2. It was much faster but less accurate than Top-Down using the additional features. TopDown (Basic feats.) and Lader use exactly the same features. However, there are differences in the two systems, and they had different accuracies. Top-Down uses the beam search-based top-down method for parsing and the Passive-Aggressive algorithm for parameter estimation, and Lader uses the CYK algorithm with cube pruning and an on215 line SVM algorithm. Especially, Lader optimizes FRS in the default setting, and it may be the reason that Lader had higher FRS. 4.2.2 Performance of Preordering for Various Training Data Table 4 shows the preordering accuracy and BLEU scores when Top-Down was trained with various data. The best BLEU score for Top-Down was obtained by using manually annotated data for enja and 100k forced-decoding data for ja-en. The performance was improved by increasing the data size. 4.2.3 End-to-End Evaluation for Various Language Pairs Table 5 shows the BLEU score of each system for 12 language pairs. Some blank fields mean that the results are unavailable due to the lack of rules or dependency parsers. For all the language pairs, Top-Down had higher BLEU scores than Lader. For ja-en and ur-en, using Forced-100k instead of EM-100k for Top-Down improved the BLEU scores by more than 0.6, but it did not always improved. Manual-Rules performed the best for en-ja, but it needs manually created rules and is difficult to be applied to many language pairs. AutoRules and Classifier had higher scores than NoPreordering except for fr-en, but cannot be applied to the languages with no available dependency parsers. Top-Down (Forced-100k) can be applied to any language, and had statistically significantly better BLEU scores than No-Preordering, ManualRules, Auto-Rules, Classifier and Lader for 7 language pairs (en-fr, fr-en, hi-en, ja-en, ko-en, tr-en and ur-en), and similar performance for other language pairs except for en-ja, without dependency parsers trained with manually annotated data. In all the experiments so far, the decoder was allowed to reorder even after preordering was carried out. In order to see the performance without reordering after preordering, we conducted experiments by setting the distortion limit to 0. Table 6 shows the results. The effect of the distortion limits varies for language pairs and preordering methods. 
The BLEU scores of Top-Down were not affected largely even when relying only on preordering. 5 Conclusion In this paper, we proposed a top-down BTG parsing method for preordering. The method incrementally builds parse trees by splitting larger spans into smaller ones. The method provides an easy way to check the validity of each parser state, which allows us to use early update for latent variable Perceptron with beam search. In the experiments, it was shown that the top-down parsing method is more than 10 times faster than a CYKbased method. The top-down method had better BLEU scores for 7 language pairs without relying on supervised syntactic parsers compared to other preordering methods. Future work includes developing a bottom-up BTG parser with latent variables, and comparing the results to the top-down parser. References Alexandra Antonova and Alexey Misyurev. 2011. Building a Web-Based Parallel Corpus and Filtering Out Machine-Translated Text. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web, pages 136–144. Alexandra Birch, Miles Osborne, and Phil Blunsom. 2010. Metrics for MT Evaluation: Evaluating Reordering. Machine Translation, 24(1):15–26. Prosenjit Bose, Jonathan F. Buss, and Anna Lubiw. 1998. Pattern matching for permutations. Information Processing Letters, 65(5):277–283. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The Mathematics of Statistical Machine Translation: Parameter Estimation. Computational Linguistics, 19(2):263–311. David Chiang. 2007. Hierarchical Phrase-Based Translation. Computational Linguistics, 33(2):201– 228. Michael Collins and Brian Roark. 2004. Incremental Parsing with the Perceptron Algorithm. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics, pages 111–118. Michael Collins, Philipp Koehn, and Ivona Kucerova. 2005. Clause Restructuring for Statistical Machine Translation. In Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics, pages 531–540. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online Passive-Aggressive Algorithms. Journal of Machine Learning Research, 7:551–585. John DeNero and Jakob Uszkoreit. 2011. Inducing Sentence Structure from Parallel Corpora for Reordering. In Proceedings of the 2011 Conference on 216 Empirical Methods in Natural Language Processing, pages 193–203. Yoav Freund and Robert E. Schapire. 1999. Large Margin Classification Using the Perceptron Algorithm. Machine Learning, 37(3):277–296. Kuzman Ganchev and Mark Dredze. 2008. Small Statistical Models by Random Feature Mixing. In Proceedings of the ACL-08: HLT Workshop on Mobile Language Processing, pages 19–20. Dmitriy Genzel. 2010. Automatically Learning Source-side Reordering Rules for Large Scale Machine Translation. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 376–384. Yoav Goldberg and Michael Elhadad. 2010. An Efficient Algorithm for Easy-first Non-directional Dependency Parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 742–750. Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured Perceptron with Inexact Search. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 142–151. 
Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic Evaluation of Translation Quality for Distant Language Pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 944–952. Laura Jehl, Adri`a de Gispert, Mark Hopkins, and Bill Byrne. 2014. Source-side Preordering for Translation using Logistic Regression and Depthfirst Branch-and-Bound Search. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics, pages 239–248. Maurice G. Kendall. 1938. A New Measure of Rank Correlation. Biometrika, 30(1/2):81–93. Maxim Khalilov and Khalil Sima’an. 2010. Source reordering using MaxEnt classifiers and supertags. In Proceedings of the 14th Annual Conference of the European Association for Machine Translation. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, pages 48–54. Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388–395. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple Semi-supervised Dependency Parsing. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 595–603. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the 18th International Conference on Machine Learning, pages 282– 289. Uri Lerner and Slav Petrov. 2013. Source-Side Classifier Preordering for Machine Translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 513– 523. Chi-Ho Li, Minghui Li, Dongdong Zhang, Mu Li, Ming Zhou, and Yi Guan. 2007. A Probabilistic Approach to Syntax-based Reordering for Statistical Machine Translation. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 720–727. Ji Ma, Jingbo Zhu, Tong Xiao, and Nan Yang. 2013. Easy-First POS Tagging and Dependency Parsing with Beam Search. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 110– 114. Wolfgang Macherey, Franz Och, Ignacio Thayer, and Jakob Uszkoreit. 2008. Lattice-based Minimum Error Rate Training for Statistical Machine Translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 725–734. Valerio Antonio Miceli Barone and Giuseppe Attardi. 2013. Pre-Reordering for Machine Translation Using Transition-Based Walks on Dependency Parse Trees. In Proceedings of the 8th Workshop on Statistical Machine Translation, pages 164–169. Hwidong Na and Jong-Hyeok Lee. 2013. A Discriminative Reordering Parser for IWSLT 2013. In Proceedings of the 10th International Workshop for Spoken Language Translation, pages 83–86. Graham Neubig, Taro Watanabe, and Shinsuke Mori. 2012. Inducing a Discriminative Parser to Optimize Machine Translation Reordering. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 843–853. Joakim Nivre. 2004. Incrementality in Deterministic Dependency Parsing. 
In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together, pages 50–57. Franz Josef Och and Hermann Ney. 2004. The Alignment Template Approach to Statistical Machine Translation. Computational Linguistics, 30(4):417– 449. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 311–318. Kenji Sagae and Alon Lavie. 2005. A Classifier-Based Parser with Linear Run-Time Complexity. In Proceedings of the 9th International Workshop on Parsing Technology, pages 125–132. 217 Francesco Sartorio, Giorgio Satta, and Joakim Nivre. 2013. A Transition-Based Dependency Parser Using a Dynamic Parsing Strategy. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 135–144. Xu Sun, Takuya Matsuzaki, Daisuke Okanohara, and Jun’ichi Tsujii. 2009. Latent Variable Perceptron Algorithm for Structured Classification. In Proceedings of the 21st International Joint Conference on Artificial Intelligence, pages 1236–1242. Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and Type Constraints for Cross-Lingual Part-of-Speech Tagging. Transactions of the Association of Computational Linguistics, 1:1–12. David Talbot, Hideto Kazawa, Hiroshi Ichikawa, Jason Katz-Brown, Masakazu Seno, and Franz J. Och. 2011. A Lightweight Evaluation Framework for Machine Translation Reordering. In Proceedings of the 6th Workshop on Statistical Machine Translation, pages 12–21. Christoph Tillman. 2004. A Unigram Orientation Model for Statistical Machine Translation. In Proceedings of the 2004 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (Short Papers), pages 101–104. Roy Tromble and Jason Eisner. 2009. Learning Linear Ordering Problems for Better Translation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1007– 1016. Jakob Uszkoreit, Jay M. Ponte, Ashok C. Popat, and Moshe Dubiner. 2010. Large Scale Parallel Document Mining for Machine Translation. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1101–1109. Karthik Visweswariah, Jiri Navratil, Jeffrey Sorensen, Vijil Chenthamarakshan, and Nandakishore Kambhatla. 2010. Syntax Based Reordering with Automatically Derived Rules for Improved Statistical Machine Translation. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1119–1127. Karthik Visweswariah, Rajakrishnan Rajkumar, Ankur Gandhe, Ananthakrishnan Ramanathan, and Jiri Navratil. 2011. A Word Reordering Model for Improved Machine Translation. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 486–496. Stephan Vogel, Hermann Ney, and Christoph Tillmann. 1996. HMM-based Word Alignment in Statistical Translation. In Proceedings of the 16th Conference on Computational Linguistics, pages 836–841. Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3):377–403. Fei Xia and Michael McCord. 2004. Improving a Statistical MT System with Automatically Learned Rewrite Patterns. In Proceedings of the 20th International Conference on Computational Linguistics, pages 508–514. Peng Xu, Jaeho Kang, Michael Ringgaard, and Franz Och. 2009. 
Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 245–253. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical Dependency Analysis with Support Vector Machines. In Proceedings of the 8th International Workshop on Parsing Technologies, pages 195–206. Nan Yang, Mu Li, Dongdong Zhang, and Nenghai Yu. 2012. A Ranking-based Approach to Word Reordering for Statistical Machine Translation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 912–920. Heng Yu, Liang Huang, Haitao Mi, and Kai Zhao. 2013. Max-Violation Perceptron and Forced Decoding for Scalable MT Training. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1112–1123. Richard Zens and Hermann Ney. 2006. Discriminative Reordering Models for Statistical Machine Translation. In Proceedings on the Workshop on Statistical Machine Translation, pages 55–63. Yue Zhang and Joakim Nivre. 2011. Transition-based Dependency Parsing with Rich Non-local Features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Short Papers, pages 188–193. 218
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 219–228, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Online Multitask Learning for Machine Translation Quality Estimation Jos´e G. C. de Souza(1,2), Matteo Negri(1), Elisa Ricci(1), Marco Turchi(1) (1) FBK - Fondazione Bruno Kessler, Via Sommarive 18, 38123 Trento, Italy (2) University of Trento, Italy {desouza,negri,eliricci,turchi}@fbk.eu Abstract We present a method for predicting machine translation output quality geared to the needs of computer-assisted translation. These include the capability to: i) continuously learn and self-adapt to a stream of data coming from multiple translation jobs, ii) react to data diversity by exploiting human feedback, and iii) leverage data similarity by learning and transferring knowledge across domains. To achieve these goals, we combine two supervised machine learning paradigms, online and multitask learning, adapting and unifying them in a single framework. We show the effectiveness of our approach in a regression task (HTER prediction), in which online multitask learning outperforms the competitive online single-task and pooling methods used for comparison. This indicates the feasibility of integrating in a CAT tool a single QE component capable to simultaneously serve (and continuously learn from) multiple translation jobs involving different domains and users. 1 Introduction Even if not perfect, machine translation (MT) is now getting reliable enough to support and speedup human translation. Thanks to this progress, the work of professional translators is gradually shifting from full translation from scratch to MT post-editing. Advanced computer-assisted translation (CAT) tools1 provide a natural framework for this activity by proposing, for each segment in a source document, one or more suggestions obtained either from a translation memory (TM) or from an MT engine. In both cases, accurate mechanisms to indicate the reliability of a suggestion 1See for instance the open source MateCat tool (Federico et al., 2014). are extremely useful to let the user decide whether to post-edit a given suggestion or ignore it and translate the source segment from scratch. However, while scoring TM matches relies on standard methods based on fuzzy matching, predicting the quality of MT suggestions at run-time and without references is still an open issue. This is the goal of MT quality estimation (QE), which aims to predict the quality of an automatic translation as a function of the estimated number of editing operations or the time required for manual correction (Specia et al., 2009; Soricut and Echihabi, 2010; Bach et al., 2011; Mehdad et al., 2012). So far, QE has been mainly approached in controlled settings where homogeneous training and test data is used to learn and evaluate static predictors. Cast in this way, however, it does not fully reflect (nor exploit) the working conditions posed by the CAT framework, in which: 1. The QE module is exposed to a continuous stream of data. The amount of such data and the tight schedule of multiple, simultaneous translation jobs prevents from (theoretically feasible but impractical) complete re-training procedures in a batch fashion and advocate for continuous learning methods. 2. The input data can be diverse in nature. 
Continuous learning should be sensitive to such differences, in a way that each translation job and user is supported by a reactive model that is robust to variable working conditions. 3. The input data can show similarities with previous observations. Continuous learning should leverage such similarities, so that QE can capitalize from all the previously processed segments even if they come from different domains, genres or users. While previous QE research disregarded these challenges or addressed them in isolation, our 219 work tackles them in a single unifying framework based on the combination of two paradigms: online and multitask learning. The former provides continuous learning capabilities that allow the QE model to be robust and self-adapt to a stream of potentially diverse data. The latter provides the model with the capability to exploit the similarities between data coming from different sources. Along this direction our contributions are: • The first application of online multitask learning to QE, geared to the challenges posed by CAT technology. In this framework, our models are trained to predict MT quality in terms of HTER (Snover et al., 2006).2 • The extension of current online multitask learning methods to regression. Prior works in the machine learning field applied this paradigm to classification problems, but its use for HTER estimation requires real-valued predictions. To this aim, we propose a new regression algorithm that, at the same time, handles positive and negative transfer and performs online weight updates. • A comparison between online multitask and alternative, state-of-the-art online learning strategies. Our experiments, carried out in a realistic scenario involving a stream of data from four domains, lead to consistent results that prove the effectiveness of our approach. 2 Related Work In recent years, sentence-level QE has been mainly investigated in controlled evaluation scenarios such as those proposed by the shared tasks organized within the WMT workshop on SMT (Callison-Burch et al., 2012; Bojar et al., 2013; Bojar et al., 2014). In this framework, systems trained from a collection of (source, target, label) instances are evaluated based on their capability to predict the correct label3 for new, unseen test items. Compared to our application scenario, the shared tasks setting differs in two main aspects. 2The HTER is the minimum edit distance between a translation suggestion and its manually post-edited version in the [0,1] interval. Edit distance is calculated as the number of edits (word insertions, deletions, substitutions, and shifts) divided by the number of words in the reference. 3Possible label types include post-editing effort scores (e.g. 1-5 Likert scores indicating the estimated percentage of MT output that has to be corrected), HTER values, and post-editing time (e.g. seconds per word). First, the data used are substantially homogeneous (usually they come from the same domain, and target translations are produced by the same MT system). Second, training and test are carried out as distinct, sequential phases. Instead, in the CAT environment, a QE component should ideally serve, adapt to and continuously learn from simultaneous translation jobs involving different MT engines, domains, genres and users (Turchi et al., 2013). These challenges have been separately addressed from different perspectives in few recent works. Huang et al. (2014) proposed a method to adaptively train a QE model for documentspecific MT post-editing. 
Adaptability, however, is achieved in a batch fashion, by re-training an ad hoc QE component for each document to be translated. The adaptive approach proposed by Turchi et al. (2014) overcomes the limitations of batch methods by applying an online learning protocol to continuously learn from a stream of (potentially heterogeneous) data. Experimental results suggest the effectiveness of online learning as a way to exploit user feedback to tailor QE predictions to their quality standards and to cope with the heterogeneity of data coming from different domains. However, though robust to user and domain changes, the method is solely driven by the distance computed between predicted and true labels, and it does not exploit any notion of similarity between tasks (e.g. domains, users, MT engines). On the other way round, task relatedness is successfully exploited by Cohn and Specia (2013), who apply multitask learning to jointly learn from data obtained from several annotators with different levels of expertise and reliability. A similar approach is adopted by de Souza et al. (2014a), who apply multitask learning to cope with situations in which a QE model has to be trained with scarce data from multiple domains/genres, different from the actual test domain. The two methods significantly outperform both individual single-task (indomain) models and single pooled models. However, operating in batch learning mode, none of them provides the continuous learning capabilities desirable in the CAT framework. The idea that online and multitask learning can complement each other if combined is suggested by (de Souza et al., 2014b), who compared the two learning paradigms in the same experimental setting. So far, however, empirical evidence of this complementarity is still lacking. 220 3 Online Multitask Learning for QE Online learning takes place in a stepwise fashion. At each step, the learner processes an instance (in our case a feature vector extracted from source and target sentences) and predicts a label for it (in our case an HTER value). After the prediction, the learner receives the “true” label (in our case the actual HTER computed from a human post-edition) and computes a loss that indicates the distance between the predicted and the true label. Before going to the next step, the weights are updated according to the suffered loss. Multitask learning (MTL) aims to simultaneously learn models for a set of possibly related tasks by exploiting their relationships. By doing this, improved generalization capabilities are obtained over models trained on the different tasks in isolation (single-task learning – STL). The relationships among tasks are provided by a shared structure, which can encode three types of relationships based on their correlation (Zhang and Yeung, 2010). Positive correlation indicates that the tasks are related and knowledge transfer should lead to similar model parameters. Negative correlation indicates that the tasks are likely to be unrelated and knowledge transfer should force an increase in the distance between model parameters. No correlation indicates that the tasks are independent and no knowledge transfer should take place. In our case, a task is a set of (instance, label) pairs obtained from source sentences coming from different translation jobs, together with their translations produced by several MT systems and the relative post-editions from various translators. In this paper the terms task and domain are used interchangeably. 
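To make the protocol just described concrete, the following minimal sketch (in Python, with illustrative names and toy data) runs one pass over a stream of (feature vector, HTER label) pairs, predicting, suffering the epsilon-insensitive loss, and applying a passive-aggressive-style update of the kind the paper develops in Section 3.1; the epsilon and C values shown are placeholders, not tuned settings.

import numpy as np

def epsilon_insensitive_loss(y_true, y_pred, eps):
    # Zero inside the epsilon tube, linear outside it (cf. Eq. 2).
    return max(0.0, abs(y_true - y_pred) - eps)

def online_pa_regression(stream, dim, eps=0.01, C=1.0):
    """Online protocol: predict, receive the true HTER, compute the loss, update."""
    w = np.zeros(dim)
    for x, y_true in stream:              # x: feature vector, y_true: HTER in [0, 1]
        y_pred = float(np.dot(w, x))      # 1) predict
        loss = epsilon_insensitive_loss(y_true, y_pred, eps)   # 2) suffer a loss
        if loss > 0.0:                    # 3) loss-bounded, aggressive weight update
            tau = min(C, loss / (np.dot(x, x) + 1e-12))
            w += np.sign(y_true - y_pred) * tau * x
    return w

# Toy stream with 17 random features standing in for the QuEst baseline features.
rng = np.random.RandomState(0)
toy_stream = [(rng.rand(17), float(rng.rand())) for _ in range(100)]
weights = online_pa_regression(toy_stream, dim=17)

In a single-task setting this loop is essentially the online single-task baseline used later for comparison; the multitask extension couples such updates across tasks through the interaction matrix introduced below.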
Early MTL methods model only positive correlation (Caruana, 1997; Argyriou et al., 2008), which results in a positive knowledge transfer between all the tasks, with the risk of impairing each other’s performance when they are unrelated or negatively correlated. Other methods (Jacob et al., 2009; Zhong and Kwok, 2012; Yan et al., 2014) cluster tasks into different groups and share knowledge only among those in the same cluster, thus implicitly identifying outlier tasks. A third class of algorithms considers all the three types of relationships by learning task interaction via the covariance of task-specific weights (Bonilla et al., 2008; Zhang and Yeung, 2010). All these methods, however, learn the task relationships in batch mode. To overcome this limitation, recent works propose the “lifelong learning” paradigm (Eaton and Ruvolo, 2013; Ruvolo and Eaton, 2014), in which all the instances of a task are given to the learner sequentially and the previously learned tasks are leveraged to improve generalization for future tasks. This approach, however, is not applicable to our scenario as it assumes that all the instances of each task are processed as separate blocks. In this paper we propose a novel MTL algorithm for QE that learns the structure shared by different tasks in an online fashion and from an input stream of instances from all the tasks. To this aim, we extend the online passive aggressive (PA) algorithm (Crammer et al., 2006) to the multitask scenario, learning a set of task-specific regression models. The multitask component of our method is given by an “interaction matrix” that defines to which extent each encoded task can “borrow” and “lend” knowledge from and to the other tasks. Opposite to previous methods (Cavallanti et al., 2010) that assume fixed dependencies among tasks, we propose to learn the interaction matrix instanceby-instance from the data. To this aim we follow the recent work of Saha et al. (2011), extending it to a regression setting. The choice of PA is motivated by practical reasons. Indeed, by providing the best trade-off between accuracy and computational time (He and Wang, 2012) compared to other algorithms such as OnlineSVR (Parrella, 2007), it represents a good solution to meet the demand of efficiency posed by the CAT framework. 3.1 Passive Aggressive Algorithm PA follows the typical online learning protocol. At each round t the learner receives an instance, xt ∈ Rd (d is the number of features), and predicts the label ˆyt according to a function parametrized by a set weights wt ∈Rd. Next, the learner receives the true label yt, computes the ϵ-insensitive loss, ℓϵ, measuring the deviation between the prediction ˆyt and the true label yt and updates the weights. The weights are updated by solving the optimization problem: wt = arg min w CP A(w) + Cξ (1) s.t. ℓϵ(w, (xt, yt)) ≤ξ and ξ ≥0 where CPA(w) = 1 2||w −wt−1||2 and ℓϵ is the ϵ-insensitive hinge loss defined as: 221 ℓϵ(w, (x, y)) = ( 0, if |y −w · x| ≤ϵ |y −w · x| −ϵ, otherwise (2) The loss is zero when the absolute difference between the prediction and the true label is smaller or equal to ϵ, and grows linearly with this difference otherwise. The ϵ parameter is given as input and regulates the sensitivity to mistakes. The slack variable ξ acts as an upper-bound to the loss, while the C parameter is introduced to control the aggressiveness of the weights update. High C values lead to more aggressive weight updates. 
However, when the labels present some degree of noise (a common situation in MT QE), they might cause the learner to drastically change the weight vector in a wrong direction. In these situations, setting C to small values is desirable. As shown in (Crammer et al., 2006), a closed form solution for the weights update in Eq.1 can be derived as: wt = wt−1 + sgn(yt −ˆyt)τtxt (3) with τt = min(C, ℓt ||xt||2 ) and ℓt = ℓϵ(w, (xt, yt)). 3.2 Passive Aggressive MTL Algorithm Our Passive Aggressive Multitask Learning (PAMTL) algorithm extends the traditional PA for regression to multitask learning. Our approach is inspired by the Online Task Relationship Learning algorithm proposed by Saha et al. (2011) which, however, is only defined for classification. The learning process considers one instance at each round t. The random sequence of instances belongs to a fixed set of K tasks and the goal of the algorithm is to learn K linear models, one for each task, parametrized by weight vectors ewt,k, k ∈ {1, . . . , K}. Moreover, the algorithm also learns a positive semidefinite matrix Ω∈RK×K, modeling the relationship among tasks. Algorithm 1 summarizes our approach. At each round t, the learner receives a pair (xt, it) where xt ∈Rd is an instance and it ∈{1, . . . , K} is the task identifier. Each incoming instance is transformed to a compound vector φt = [0, . . . , 0, xt, 0, . . . , 0] ∈RKd. Then, the algorithm predicts the HTER score corresponding to the label ˆy by using the weight vector ewt. The weight vector is a compound vector ewt = [ewt,1, . . . , ewt,K] ∈RKd, where ewt,k ∈ Rd , k ∈{1, . . . , K}. Next, the learner receives the true HTER label y and computes the loss ℓϵ (Eq. 2) for round t. Algorithm 1 PA Multitask Learning (PAMTL) Input: instances from K tasks, number of rounds R > 0, ϵ > 0, C > 0 Output: w and Ω, learned after T rounds Initialization: Ω= 1 K × Ik, w = 0 for t = 1 to T do receive instance (xt, it) compute φt from xt predict HTER ˆyt = ( ewT t · φt) receive true HTER label yt compute ℓt (Eq. 2) compute τt = min(C, ℓt ||φt||2 ) /* update weights */ ewt = ewt−1 + sgn(yt −ˆyt)τt(Ωt−1 ⊗Id)−1φt /* update task matrix */ if t > R then update Ωt with Eq. 6 or Eq. 7 end if end for We propose to update the weights by solving: ewt, Ωt = argmin w,Ω≻0 CMT L(w, Ω) + Cξ + D(Ω, Ωt−1) s.t. ℓϵ(w, (xt, yt)) ≤ξ, ξ ≥0 (4) The first term models the joint dependencies between the task weights and the interaction matrix and it is defined as CMTL(w, Ω) = 1 2(w −ewt)T Ω⊗(w −ewt), where Ω⊗= Ω⊗ Id. The function D(·) represents the divergence between a pair of positive definite matrices. Similar to (Saha et al., 2011), to define D(·) we also consider the family of Bregman divergences and specifically the LogDet and the Von Neumann divergences. Given two matrices X, Y ∈ Rn×n, the LogDet divergence is DLD(X, Y) = tr(XY−1) −log |XY−1| −n, while the Von Neumann divergence is computed as DV N(X, Y) = tr(X log X−Y log Y−X+Y). The optimization process to solve Eq.4 is performed with an alternate scheme: first, with a fixed Ω, we compute w; then, given w we optimize for Ω. The closed-form solution for updating w, which we derived similarly to the PA update (Crammer et al., 2006), becomes: ewt = ewt−1 + sgn(yt −ˆyt)τt(Ωt−1 ⊗Id)−1φt (5) In practice, the interaction matrix works as a learning rate when updating the weights of each task. Similarly, following previous works (Tsuda et al., 2005), the update steps for the interaction matrix Ωcan be easily derived. 
For the Log-Det divergence we have: Ωt = (Ωt−1 + η sym(f WT t−1 f Wt−1))−1 (6) 222 while for the Von Neumann we obtain: Ωt = exp(log Ωt−1 −η sym(f WT t−1 f Wt−1)) (7) where f Wt ∈Rd×K is a matrix obtained by column-wise reshaping the weight vector ewt, sym(X) = (X + XT )/2 and η is the learning rate parameter. The sequence of steps to compute Ωt and ewt is summarized in Algorithm 1. Importantly, the weight vector is updated at each round t, while Ωt is initialized to a diagonal matrix and it is only computed after R iterations. In this way, at the beginning, the tasks are assumed to be independent and the task-specific regression models are learned in isolation. Then, after R rounds, the interaction matrix is updated and the weights are refined considering tasks dependencies. This leads to a progressive increase in the correlation of weight vectors of related tasks. In the following, PAMTLvn refers to PAMTL with the Von Neumann updates and PAMTLld to PAMTL with LogDet updates. 4 Experimental Setting In this section, we describe the data used in our experiments, the features extracted from the source and target sentences, the evaluation metric and the baselines used for comparison. Data. We experiment with English-French datasets coming from Technology Entertainment Design talks (TED), Information Technology manuals (IT) and Education Material (EM). All datasets provide a set of tuples composed by (source, translation and post-edited translation). The TED dataset is distributed in the Trace corpus4 and includes, as source sentences, the subtitles of several talks spanning a range of topics presented in the TED conferences. Translations were generated by two different MT systems: a phrase-based statistical MT system and a commercial rule-based system. Post-editions were collected from four different translators, as described by Wisniewski et al. (2013). The IT manuals data come from two language service providers, henceforth LSP1 and LSP2. The ITLSP1 tuples belong to a software manual translated by an SMT system trained using the Moses toolkit (Koehn et al., 2007). The posteditions were produced by one professional trans4http://anrtrace.limsi.fr/trace_ postedit.tar.bz2 Domain No. Vocab. Avg. Snt. tokens Size Length TED src 20,048 3,452 20 TED tgt 21,565 3,940 22 ITLSP1 src 12,791 2,013 13 ITLSP1 tgt 13,626 2,321 13 EM src 15,327 3,200 15 EM tgt 17,857 3,149 17 ITLSP2 src 15,128 2,105 13 ITLSP2 tgt 17,109 2,104 14 Table 1: Data statistics for each domain. lator. The ITLSP2 data includes a software manual from the automotive industry; its source sentences are translated with an adaptive proprietary MT system and post-edited by several professional translators. The EM corpus is also provided by LSP2 and regards educational material (e.g. courseware and assessments) of various text styles. The translations and post-editions are produced in the same way as for ITLSP2. The ITLSP2 and the EM datasets are derived from the Autodesk Post-Editing Data corpus.5 In total, we end up with four domains (TED, ITLSP1, EM and ITLSP2), which allows us to evaluate the PAMTL algorithm in realistic conditions where the QE component is exposed to a continuous stream of heterogeneous data. 
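To make concrete how Algorithm 1 consumes such a heterogeneous stream, the sketch below applies the task-coupled weight update of Eq. (5) to (feature vector, task id, HTER) triples drawn from K assumed domains; for simplicity the interaction matrix is left at its diagonal initialization, whereas the full algorithm would refine it with Eq. (6) or Eq. (7) after R rounds. All names and values are illustrative.

import numpy as np

def pamtl_round(w, omega, x, task_id, y_true, K, d, eps=0.01, C=1.0):
    """One round of the task-coupled passive-aggressive update (cf. Eq. 5)."""
    phi = np.zeros(K * d)
    phi[task_id * d:(task_id + 1) * d] = x            # compound vector: x placed in its task's block
    y_pred = float(np.dot(w, phi))
    loss = max(0.0, abs(y_true - y_pred) - eps)       # epsilon-insensitive loss
    if loss > 0.0:
        tau = min(C, loss / (np.dot(phi, phi) + 1e-12))
        # (Omega ⊗ I_d)^{-1} = Omega^{-1} ⊗ I_d acts as a per-task learning rate on the update.
        coupled = np.kron(np.linalg.inv(omega), np.eye(d)).dot(phi)
        w = w + np.sign(y_true - y_pred) * tau * coupled
    return w

K, d = 4, 17                                          # four domains, 17 features, as in this setting
rng = np.random.RandomState(1)
stream = [(rng.rand(d), int(rng.randint(K)), float(rng.rand())) for _ in range(200)]
w = np.zeros(K * d)
omega = np.eye(K) / K                                 # diagonal start: tasks initially independent
for x, t, y in stream:
    w = pamtl_round(w, omega, x, t, y, K, d)

Inverting and expanding omega at every round is done here only for clarity; an implementation would exploit the block structure of Omega ⊗ I_d instead.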
Each domain is composed by 1,000 tuples formed by: i) the English source sentence, ii) its automatic translation in French, and iii) a real-valued quality label obtained by computing the HTER between the translation and the post-edition with the TERCpp open source tool.6 Table 1 reports some macro-indicators (number of tokens, vocabulary size, average sentence length) that give an idea about the similarities and differences between domains. Although they contain data from different software manuals, similar vocabulary size and sentence lengths for the two IT domains seem to reflect some commonalities in their technical style and jargon. Larger values for TED and EM evidence a higher lexical variability in the topics that compose these domains and the expected stylistic differences featured by speech transcriptions and non-technical writing. Overall, these numbers suggest a possible dissimilar5https://autodesk.app.box.com/ Autodesk-PostEditing 6http://sourceforge.net/projects/ tercpp/ 223 Figure 1: Validation curves for the R parameter. ity between ITLSP1 and ITLSP2 and the other two domains, which might make knowledge transfer across them more difficult and QE model reactivity to domain changes particularly important. Features. Our models are trained using the 17 baseline features proposed in (Specia et al., 2009), extracted with the online version of the QuEst feature extractor (Shah et al., 2014). These features take into account the complexity of the source sentence (e.g. number of tokens, number of translations per source word) and the fluency of the translation (e.g. language model probabilities). Their description is available in (Callison-Burch et al., 2012). The results of previous WMT QE shared tasks have shown that these features are particularly competitive in the HTER prediction task. Baselines. We compare the performance of PAMTL against three baselines: i) pooling mean, ii) pooling online single task learning (STLpool) and iii) in-domain online single task learning (STLin). The pooling mean is obtained by assigning a fixed prediction value to each test point. This value is the average HTER computed on the entire pool of training data. Although assigning the same prediction to each test instance would be useless in real applications, we compare against the mean baseline since it is often hard to beat in regression tasks, especially when dealing with heterogeneous data distributions (Rubino et al., 2013). The two online single task baselines implement the PA algorithm described in Section 3.1. The choice of PA is to make them comparable to our method, so that we can isolate more precisely the contribution of multitask learning. STLpool results are obtained by a single model trained on the entire Figure 2: Learning curves for all the domains, computed by calculating the mean MAE (↓) of the four domains. pool of available training data presented in random order. STLin results are obtained by separately training one model for each domain. These represent two alternative strategies for the integration of QE in the CAT framework. The former would allow a single model to simultaneously support multiple translation jobs in different domains, without any notion about their relations. The latter would lead to a more complex architecture, organized as a pool of independent, specialized QE modules. Evaluation metric. 
The performance of our regression models is evaluated in terms of mean absolute error (MAE), a standard error measure for regression problems commonly used also for QE (Callison-Burch et al., 2012). The MAE is the average of the absolute errors ei = |ˆyi −yi|, where ˆyi is the prediction of the model and yi is the true value for the ith instance. As it is an error measure, lower values indicate better performance (↓). 5 Results and Discussion In this Section we evaluate the proposed PAMTL algorithm. First, by analyzing how the number of rounds R impacts on the performance of our approach, we empirically find the value that will be used to train the model. Then, the learned model is run on test data and compared against the baselines. Performance is analyzed both by averaging the MAE results computed on all the domains, and by separately discussing in-domain behavior. Finally, the capability of the algorithm to learn task correlations and, in turn, transfer knowledge across them, is analysed by presenting the correla224 Figure 3: Learning curves showing MAE (↓) variations for each domain. tion matrix of the task weights. For the evaluation, we uniformly sample 700 instances from each domain for training, leaving the remaining 300 instances for test. The training sets of all the domains are concatenated and shuffled to create a random sequence of points. To investigate the impact of different amounts of data on the learning process, we create ten subsets of 10 to 100% of the training data. We optimize the parameters of all the models with a grid search procedure using 5-fold cross-validation. This process is repeated for 30 different train/test splits over the whole data. Results are presented with 95% confidence bands.7 Analysis of the R parameter. We empirically study the influence of the number of instances required to start updating the interaction matrix (the R parameter in Algorithm 1). For that, we perform a set of experiments where R is initialized with nine different values (expressed as percentage of training data). Figure 1 shows the validation curves obtained in cross-validation over the training data using the LogDet and Von Neumann updates. The curves report the performance (MAE) difference between STLin and PAMTLld 7Confidence bands are used to show whether performance differences between the models are statistically significant. (black curve) and STLin and PAMTLvn (grey curve). The higher the difference, the better. The PAMTLvn curve differs from PAMTLld one only for small values of R (< 20), showing that the two divergences are substantially equivalent. It is interesting to note that with only 20% of the training data (R = 20), PAMTL is able to find a stable set of weights and to effectively update the interaction matrix. Larger values of R harm the performance, indicating that the interaction matrix updates require a reasonable amount of points to reliably transfer knowledge across tasks. We use this observation to set R for our final experiment, in which we evaluate the methods over the test data. Evaluation on test data. Global evaluation results are summarized in Figure 2, which shows five curves: one for each baseline (Mean, STLin, STLpool) and two for the proposed online multitask method (PAMTLvn and PAMTLld). The curves are computed by calculating the average MAE achieved with different amounts of data on each domain’s test set. 
The results show that PAMTLld and PAMTLvn have similar trends (confirming the substantial equivalence previously observed), and that both outperform all the baselines in a statistically significant manner. This holds for all the training set 225 sizes we experimented with. The maximum improvement over the baselines (+1.3 MAE) is observed with 60% of the training data when comparing PAMTLvn with STLin. Even if this is the best baseline, also with 100% of the data its results are not competitive and of limited interest with respect to our application scenario (the integration of effective QE models in the CAT framework). Indeed, despite the STLin downward error trend, it’s worth remarking that an increased competitiveness would come at the cost of: i) collecting large amounts of annotated data and ii) integrating the model in a complex CAT architecture organized as a pool of independent QE components. Under the tested conditions, it is also evident that the alternative strategy of using a single QE component to simultaneously serve multiple translation jobs is not viable. Indeed, STLpool is the worst performing baseline, with a constant distance of around 2 MAE points from the best PAMTL model for almost all the training set sizes. The fact that, with increasing amounts of data, the STLpool predictions get close to those of the simple mean baseline indicates its limitations to cope with the noise introduced by a continuous stream of diverse data. The capability to handle such stream by exploiting task relationships makes PAMTL a much better solution for our purposes. Per-domain analysis. Figure 3 shows the MAE results achieved on each target domain by the most competitive baseline (STLin) and the proposed online multitask method (PAMTLvn, PAMTLld). For all the domains, the behavior of PAMTLld and PAMTLvn is consistent and almost identical. With both divergences, the improvement of PAMTL over online single task learning becomes statistically significant when using more than 30% of the training data (210 instances). Interestingly, in all the plots, with 20% of the training data (140 instances for each domain, i.e. a total of 560 instances adding data from all the domains), PATML results are comparable to those achieved by STLin with 80% of the training data (i.e. 560 in-domain instances). This confirms that PATML can effectively leverage data heterogeneity, and that a limited amount of in-domain data is sufficient to make it competitive. Nevertheless, for all domains except EM, the PATML and STLin curves converge to comparable performance when trained with 100% of the data. This is not surprising if we consider that EM has a varied vocabulary Figure 4: Correlation among the weights predicted by PATMLvn using all the training data. (see Table 1), which may be evidence of the presence of different topics, increasing its similarity with other domains. The same assumption should also hold for TED, given that its source sentences belong to talks about different topics. The results for the TED domain, however, do not present the same degree of improvement as for EM. To better understand the relationships learned by the PAMTL models, we compute the correlation between the weights inferred for each domain (as performed by Saha et al. (2011)). Figure 4 shows the correlations computed on the task weights learned by PATMLvn with all the training data. In the matrix, EM is the domain that presents the highest correlation with all the others. 
Instead, TED and ITLSP2 are the less correlated with the other domains (even though, being close to the other IT domain, ITLSP2 can share knowledge with it). This explains why the improvement measured on TED is smaller compared to EM. Although there is no canonical way to measure correlation among domains, the weights correlation matrix and the improvements achieved by PAMTL show the capability of the method to identify task relationships and exploit them to improve the generalization properties of the model. 6 Conclusion We addressed the problem of developing quality estimation models suitable for integration in computer-assisted translation technology. In this framework, on-the-fly MT quality prediction for a stream of heterogeneous data coming from different domains/users/MT systems represents a major challenge. On one side, processing such stream calls for supervised solutions that avoid the bot226 tleneck of periodically retraining the QE models in a batch fashion. On the other side, handling data heterogeneity requires the capability to leverage data similarities and dissimilarities. While previous works addressed these two problems in isolation, by proposing approaches respectively based on online and multitask learning, our solution unifies the two paradigms in a single online multitask approach. To this aim, we developed a novel regression algorithm, filling a gap left by current online multitask learning methods that only operate in classification mode. Our approach, which is based on the passive aggressive algorithm, has been successfully evaluated against strong online single-task competitors in a scenario involving four domains. Our future objective is to extend our evaluation to streams of data coming from a larger number of domains. Finding reasonably-sized datasets for this purpose is currently difficult. However, we are confident that the gradual shift of the translation industry towards human MT post-editing will not only push for further research on these problems, but also provide data for larger scale evaluations in a short time. To allow for replicability of our results and promote further research on QE, the features extracted from our data, the computed labels and the source code of the method are available at https://github.com/jsouza/pamtl. Acknowledgements This work has been partially supported by the ECfunded H2020 project QT21 (grant agreement no. 645452). The authors would like to thank Dr. Ventsislav Zhechev for his support with the Autodesk Post-Editing Data corpus. References Andreas Argyriou, Theodoros Evgeniou, and Massimiliano Massimo Pontil. 2008. Convex multi-task feature learning. Machine Learning, 73(3):243– 272, January. Nguyen Bach, F. Huang, and Y. Al-Onaizan. 2011. Goodness: A method for measuring machine translation confidence. In 49th Annual Meeting of the Association for Computational Linguistics. Ondˇrej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Eighth Workshop on Statistical Machine Translation, pages 1–44, Sofia, Bulgaria, August. Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleˇs Tamchyna. 2014. Findings of the 2014 Workshop on Statistical Machine Translation. 
In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58, Baltimore, USA, June. Edwin Bonilla, Kian Ming Chai, and Christopher Williams. 2008. Multi-task Gaussian Process Prediction. In Advances in Neural Information Processing Systems 20: NIPS’08. Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 Workshop on Statistical Machine Translation. In Proceedings of the 7th Workshop on Statistical Machine Translation, pages 10– 51, Montr´eal, Canada, June. Rich Caruana. 1997. Multitask learning. In Machine Learning, pages 41–75. Giovanni Cavallanti, N Cesa-Bianchi, and C Gentile. 2010. Linear algorithms for online multitask classification. The Journal of Machine Learning Research, 11:2901–2934. Trevor Cohn and Lucia Specia. 2013. Modelling Annotator Bias with Multi-task Gaussian Processes: An application to Machine Translation Quality Estimation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 32–42, Sofia, Bulgaria, August. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online Passive-Aggressive Algorithms. The Journal of Machine Learning Research, 7:551–585. Jos´e G. C. de Souza, Marco Turchi, and Matteo Negri. 2014a. Machine Translation Quality Estimation Across Domains. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 409– 420, Dublin, Ireland, August. Jos´e G. C. de Souza, Marco Turchi, and Matteo Negri. 2014b. Towards a Combination of Online and Multitask Learning for MT Quality Estimation: a Preliminary Study. In Proceedings of Workshop on Interactive and Adaptive Machine Translation in 2014 (IAMT 2014), Vancouver, BC, Canada, October. Eric Eaton and PL Ruvolo. 2013. ELLA: An efficient lifelong learning algorithm. In Proceedings of the 30th International Conference on Machine Learning, pages 507–515, Atlanta, Georgia, USA, June. Marcello Federico, Nicola Bertoldi, Mauro Cettolo, Matteo Negri, Marco Turchi, Marco Trombetti, Alessandro Cattelan, Antonio Farina, Domenico 227 Lupinetti, Andrea Martines, Alberto Massidda, Holger Schwenk, Lo¨ıc Barrault, Frederic Blain, Philipp Koehn, Christian Buck, and Ulrich Germann. 2014. THE MATECAT TOOL. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: System Demonstrations, pages 129–132, Dublin, Ireland, August. Fei Huang, Jian-Ming Xu, Abraham Ittycheriah, and Salim Roukos. 2014. Adaptive HTER Estimation for Document-Specific MT Post-Editing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 861–870, Baltimore, Maryland, June. Laurent Jacob, Jean-philippe Vert, Francis R Bach, and Jean-philippe Vert. 2009. Clustered Multi-Task Learning: A Convex Formulation. In D Koller, D Schuurmans, Y Bengio, and L Bottou, editors, Advances in Neural Information Processing Systems 21, pages 745–752. Curran Associates, Inc. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zenz, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL 2007 Demo and Poster Sessions, pages 177– 180, Prague, Czech Republic, June. Yashar Mehdad, Matteo Negri, and Marcello Federico. 2012. 
Match without a Referee: Evaluating MT Adequacy without Reference Translations. In Proceedings of the Machine Translation Workshop (WMT2012), pages 171–180, Montr´eal, Canada, June. Francesco Parrella. 2007. Online support vector regression. Master’s Thesis, Department of Information Science, University of Genoa, Italy. Raphael Rubino, Jos´e G. C. de Souza, and Lucia Specia. 2013. Topic Models for Translation Quality Estimation for Gisting Purposes. In Machine Translation Summit XIV, pages 295–302. Paul Ruvolo and Eric Eaton. 2014. Online Multi-Task Learning via Sparse Dictionary Optimization. In Proceedings of the 28th AAAI Conference on Artificial Intelligence (AAAI-14), Qu´ebec City, Qu´ebec, Canada, July. Avishek Saha, Piyush Rai, Hal Daum´e, and Suresh Venkatasubramanian. 2011. Online Learning of Multiple Tasks and their Relationships. In Proceedings of the 14th International Conference on Artificial Intelligence and Statistics (AISTATS), Fort Lauderdale, FL, USA, April. Kashif Shah, Marco Turchi, and Lucia Specia. 2014. An Efficient and User-friendly Tool for Machine Translation Quality Estimation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation, Reykjavik, Iceland, May. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A Study of Translation Edit Rate with Targeted Human Annotation. In Association for Machine Translation in the Americas, Cambridge, MA, USA, August. Radu Soricut and A Echihabi. 2010. Trustrank: Inducing trust in automatic translations via ranking. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, number July, pages 612–621. Lucia Specia, Nicola Cancedda, Marc Dymetman, Marco Turchi, and Nello Cristianini. 2009. Estimating the Sentence-Level Quality of Machine Translation Systems. In Proceedings of the 13th Annual Conference of the EAMT, pages 28–35, Barcelona, Spain, May. Koji Tsuda, Gunnar R¨atsch, and Manfred K Warmuth. 2005. Matrix exponentiated gradient updates for online learning and bregman projection. In Journal of Machine Learning Research, pages 995–1018. Marco Turchi, Matteo Negri, and Marcello Federico. 2013. Coping with the Subjectivity of Human Judgements in MT Quality Estimation. In Proceedings of the Eighth Workshop on Statistical Machine Translation (WMT), pages 240–251, Sofia, Bulgaria, August. Marco Turchi, Antonios Anastasopoulos, Jos´e G. C. de Souza, and Matteo Negri. 2014. Adaptive Quality Estimation for Machine Translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 710–720, Baltimore, Maryland, USA, June. Guillaume Wisniewski, Anil Kumar Singh, Natalia Segal, and Franc¸ois Yvon. 2013. Design and Analysis of a Large Corpus of Post-Edited Translations: Quality Estimation, Failure Analysis and the Variability of Post-Edition. In Machine Translation Summit XIV, pages 117–124. Yan Yan, Elisa Ricci, Ramanathan Subramanian, Gaowen Liu, and Nicu Sebe. 2014. Multitask linear discriminant analysis for view invariant action recognition. IEEE Transactions on Image Processing, 23(12):5599–5611. Yu Zhang and Dit-yan Yeung. 2010. A Convex Formulation for Learning Task Relationships in Multi-Task Learning. In Proceedings of the Twenty-Sixth Conference Annual Conference on Uncertainty in Artificial Intelligence (UAI-10), pages 733–742, Catalina Island, CA, USA, July. Leon Wenliang Zhong and James T. Kwok. 2012. 
Convex multitask learning with flexible task clusters. In Proceedings of the 29 th International Conference on Machine Learning, Edinburgh, Scotland, June. 228
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 229–238, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics A Context-Aware Topic Model for Statistical Machine Translation Jinsong Su1, Deyi Xiong2∗, Yang Liu3, Xianpei Han4, Hongyu Lin1, Junfeng Yao1, Min Zhang2 Xiamen University, Xiamen, China1 Soochow University, Suzhou, China2 Tsinghua University, Beijing, China3 Institute of Software, Chinese Academy of Sciences, Beijing, China4 {jssu, hylin, yao0010}@xmu.edu.cn {dyxiong, minzhang}@suda.edu.cn [email protected] [email protected] Abstract Lexical selection is crucial for statistical machine translation. Previous studies separately exploit sentence-level contexts and documentlevel topics for lexical selection, neglecting their correlations. In this paper, we propose a context-aware topic model for lexical selection, which not only models local contexts and global topics but also captures their correlations. The model uses target-side translations as hidden variables to connect document topics and source-side local contextual words. In order to learn hidden variables and distributions from data, we introduce a Gibbs sampling algorithm for statistical estimation and inference. A new translation probability based on distributions learned by the model is integrated into a translation system for lexical selection. Experiment results on NIST ChineseEnglish test sets demonstrate that 1) our model significantly outperforms previous lexical selection methods and 2) modeling correlations between local words and global topics can further improve translation quality. 1 Introduction Lexical selection is a very important task in statistical machine translation (SMT). Given a sentence in the source language, lexical selection statistically predicts translations for source words, based on various translation knowledge. Most conventional SMT systems (Koehn et al., 2003; Galley et al., 2006; Chiang, 2007) exploit very limited context information contained in bilingual rules for lexical selection. ∗Corresponding author. {stance, attitude ...}lìchǎng duì gāi wèntí zhōngguó bǎochí zhōnglì lìchǎng [Economy topic, Politics topic ...] {problem, issue ...}wèntí Figure 1: A Chinese-English translation example to illustrate the effect of local contexts and global topics as well as their correlations on lexical selection. Each black line indicates a set of translation candidates for a Chinese content word (within a dotted box). Green lines point to translations that are favored by local contexts while blue lines show bidirectional associations between global topics and their consistent target-side translations. Previous studies that explore richer information for lexical selection can be divided into two categories: 1) incorporating sentence-level contexts (Chan et al., 2007; Carpuat and Wu, 2007; Hasan et al., 2008; Mauser et al., 2009; He et al., 2008; Shen et al., 2009) or 2) integrating document-level topics (Xiao et al., 2011; Ture et al., 2012; Xiao et al., 2012; Eidelman et al., 2012; Hewavitharana et al., 2013; Xiong et al., 2013; Hasler et al., 2014a; Hasler et al., 2014b) into SMT. The methods in these two strands have shown their effectiveness on lexical selection. However, correlations between sentence- and document-level contexts have never been explored before. It is clear that local contexts and global topics are often highly correlated. 
Consider a ChineseEnglish translation example presented in Figure 1. On the one hand, if local contexts suggest that the source word “á|/l`ıchˇang” should be translated in229 to “stance”, they will also indicate that the topic of the document where the example sentence occurs is about politics. The politics topic can be further used to enable the decoder to select a correct translation “issue” for another source word “¯ K/w`entˇi”, which is consistent with this topic. On the other hand, if we know that this document mainly focuses on the politics topic, the candiate translation “stance” will be more compatible with the context of “á|/l`ıchˇang” than the candiate translation “attitude”. This is because neighboring sourceside words “¥I/zh¯ongu´o” and “¥á/zh¯ongl`ı” often occur in documents that are about international politics. We believe that such correlations between local contextual words and global topics can be used to further improve lexical selection. In this paper, we propose a unified framework to jointly model local contexts, global topics as well as their correlations for lexical selection. Specifically, • First, we present a context-aware topic model (CATM) to exploit the features mentioned above for lexical selection in SMT. To the best of our knowledge, this is the first work to jointly model both local and global contexts for lexical selection in a topic model. • Second, we present a Gibbs sampling algorithm to learn various distributions that are related to topics and translations from data. The translation probabilities derived from our model are integrated into SMT to allow collective lexical selection with both local and global informtion. We validate the effectiveness of our model on a state-of-the-art phrase-based translation system. Experiment results on the NIST Chinese-English translation task show that our model significantly outperforms previous lexical selection methods. 2 Context-Aware Topic Model In this section, we describe basic assumptions and elaborate the proposed context-aware topic model. 2.1 Basic Assumptions In CATM, we assume that each source document d consists of two types of words: topical words which are related to topics of the document and contextual words which affect translation selections of topical words. As topics of a document are usually represented by content words in it, we choose source-side nouns, verbs, adjectives and adverbs as topical words. For contextual words, we use all words in a source sentence as contextual words. We assume that they are generated by target-side translations of other words than themselves. Note that a source word may be both topical and contextual. For each topical word, we identify its candidate translations from training corpus according to word alignments between the source and target language. We allow a target translation to be a phrase of length no more than 3 words. We refer to these translations of source topical words as target-side topical items, which can be either words or phrases. In the example shown in Figure 1, all source words within dotted boxes are topical words. Topical word “á|/l`ıchˇang” is supposed to be translated into a target-side topical item “stance”, which is collectively suggested by neighboring contextual words “ ¥I/zh¯onggu´o”, “¥á/zh¯ongl`ı” and the topic of the corresponding document. 
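As a concrete illustration of these assumptions, the sketch below shows one possible way to collect source-side topical words (via content-word POS tags) and their candidate target-side topical items (aligned target phrases of at most three words) from a word-aligned sentence pair; the tag set, data layout and alignment are invented for the example and do not reproduce the actual preprocessing pipeline.

CONTENT_TAGS = {"NN", "VV", "JJ", "AD"}   # assumed tags for nouns, verbs, adjectives, adverbs

def extract_topical_items(src_tokens, src_tags, tgt_tokens, alignment, max_len=3):
    """Map each source topical-word position to its aligned target phrase (at most max_len words)."""
    items = {}
    for i, (_word, tag) in enumerate(zip(src_tokens, src_tags)):
        if tag not in CONTENT_TAGS:
            continue                                          # only content words are topical
        tgt_positions = sorted(j for (s, j) in alignment if s == i)
        if not tgt_positions:
            continue
        span = tgt_positions[-1] - tgt_positions[0] + 1
        if span <= max_len and len(tgt_positions) == span:    # contiguous, short target phrase
            items[i] = " ".join(tgt_tokens[tgt_positions[0]:tgt_positions[-1] + 1])
    return items

# Hypothetical sentence pair loosely mirroring Figure 1.
src = ["zhongguo", "baochi", "zhongli", "lichang"]
tags = ["NN", "VV", "JJ", "NN"]
tgt = ["china", "maintains", "a", "neutral", "stance"]
align = [(0, 0), (1, 1), (2, 3), (3, 4)]
print(extract_topical_items(src, tags, tgt, align))
# {0: 'china', 1: 'maintains', 2: 'neutral', 3: 'stance'}

Aggregating such items over the word-aligned training corpus yields, for each topical word, the candidate set of target-side topical items used by the model.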
In our model, all target-side topical items in a document are generated according to the following two assumptions: • Topic consistency assumption: All target-side topical items in a document should be consistent with the topic distribution of the document. For example, the translations “issue”, “stance” tend to occur in documents about politics topic. • Context compatibility assumption: For a topical word, its translation (i.e., the counterpart target-side topical item) should be compatible with its neighboring contextual words. For instance, the translation “stance” of “á|/l`ıchˇang” is closely related to contextual words “¥I/zh¯ongu´o” and “¥á/zh¯ongl`ı”. 2.2 Model The graphical representation of CATM, which visualizes the generative process of training data D, is shown in Figure 2. Notations of CATM are presented in Table 1. In CATM, each document d can be generated in the following three steps1: 1In the following description, Dir(.), Mult(.) and Unif(.) denote Dirichlet, Multinomial and Uniform distributions, re230 Symbol Meaning α hyperparameter for θ β hyperparameter for φ γ hyperparameter for ψ δ hyperparameter for ξ f topical word c contextual word ˜e target-side topical item ˜e′ a sampled target-side topical item used to generate a source-side contextual word θ the topic distribution of document φ the distribution of a topic over target-side topical items ψ the translation probability distribution of a target-side topical item over source-side topical words ξ the generation probability distribution of a target-side topical item over source-side contextual words Nz topic number Nd document number Nf the number of topical words Nc the number of contextual words N˜e the number of target-side topical items Nf,d the number of topical words in d Nc,d the number of contextual words in d Table 1: Notations in CATM. 1. Sample a topic distribution θd∼Dir(α). 2. For each position i that corresponds to a topical word fi in the document: (a) Sample a topic zi∼Mult(θd). (b) Conditioned on the topic zi, sample a target-side topical item ˜ei∼Mult(φzi). (c) Conditioned on the target-side topical item ˜ei, sample the topical word fi∼Mult(ψ˜ei). 3. For each position j that corresponds to a contextual word cj in the document: (a) Collect all target-side topical items ˜es that are translations of neighboring topical words within a window centered at cj (window size ws). (b) Randomly sample an item from ˜es, ˜e′ j∼Unif(˜es). (c) Conditioned on the sampled target-side topical item ˜e′ j, sample the contextual word cj∼Mult(ξ˜e′ j). To better illustrate CATM, let us revisit the example in Figure 1. We describe how CATM generates topspectively. Nd Nc,d Nf,d Nee α θ z ee ee′ f c ψ γ δ ξ Nz β ϕ Figure 2: Graphical representation of our model. ical words “¯K/w`ent´ı”, “á|/l`ıchˇang”, and contextual word “¥á/zh¯ongl`ı” in the following steps: Step 1: The model generates a topic distribution for the corresponding document as {economy0.25, politics0.75}. Step 2: Based on the topic distribution, we choose “economy” and “politics” as topic assignments for “¯K/w`ent´ı” and “á|/l`ıchˇang” respectively; Then, according to the distributions of the two topics over target-side topical items, we generate target-side topical items “issue” and “stance”; Finally, according to the translation probability distributions of these two topical items over source-side topical words, we generate source-side topical words “¯K/w`ent´ı” and “á|/l`ıchˇang” for them respectively. 
Step 3: For the contextual word “¥á/zh¯ongl`ı”, we first collect target-side topical items of its neighboring topical words such as “¯ K/w`ent´ı”, “ ±/bˇaoch´ı” and “á|/l`ıchˇang” to form a targetside topical item set {“issue”,“keep”, “stance”}, from which we randomly sample one item “stance”. Next, according to the generation probability distribution of “stance” over source contextual words, we finally generate the source contextual word “¥ á/zh¯ongl`ı”. In the above generative process, all target-side topical items are generated from the underlying topics of a source document, which guarantees that selected target translations are topic-consistent. Ad231 ditionally, each source contextual word is derived from a target-side topical item given its generation probability distribution. This makes selected target translations also compatible with source-side local contextual words. In this way, global topics, topical words, local contextual words and target-side topical items are highly correlated in CATM that exactly captures such correlations for lexical selection. 3 Parameter Estimation and Inference We propose a Gibbs sampling algorithm to learn various distributions described in the previous section. Details of the learning and inference process are presented in this section. 3.1 The Probability of Training Corpus According to CATM, the total probability of training data D given hyperparameters α, β, γ and δ is computed as follows: P(D; α, β, γ, δ) = Q d P(fd, cd; α, β, γ, δ) = Q d P ˜ed P(˜ed|α, β)P(fd|˜ed, γ)P(cd|˜ed, δ) = R φ P(φ|β) R ψ P(ψ|γ) Q d P ˜ed P(fd|˜ed, ψ) × R ξ P(ξ|δ) P ˜e′ d P(˜e′ d|˜ed)p(cd|˜e′ d, ξ) × R θ P(θ|α)P(˜ed|θ, φ)dθdξdψdφ (1) where fd and ˜ed denote the sets of topical words and their target-side topical item assignments in document d, cd and ˜e′ d are the sets of contextual words and their target-side topical item assignments in document d. 3.2 Parameter Estimation via Gibbs Sampling The joint distribution in Eq. (1) is intractable to compute because of coupled hyperparameters and hidden variables. Following Han et al, (2012), we adapt the well-known Gibbs sampling algorithm (Griffiths and Steyvers, 2004) to our model. We compute the joint posterior distribution of hidden variables, denoted by P(z,˜e,˜e′|D), and then use this distribution to 1) estimate θ, φ, ψ and ξ, and 2) predict translations and topics of all documents in D. Specifically, we derive the joint posterior distribution from Eq. (1) as: P(z,˜e,˜e′|D) ∝P(z)P(˜e|z)P(f|˜e)P(˜e′|˜e)P(c|˜e′) (2) Based on the equation above, we construct a Markov chain that converges to P(z,˜e,˜e′|D), where each state is an assignment of a hidden variable (including topic assignment to a topical word, target-side topical item assignment to a source topical or contextual word.). Then, we sequentially sample each assignment according to the following three conditional assignment distributions: 1. P(zi = z|z−i,˜e,˜e′, D): topic assignment distribution of a topical word given z−i that denotes all topic assignments but zi, ˜e and ˜e′ that are target-side topical item assignments. It is updated as follows: P(zi = z|z−i,˜e,˜e′, D) ∝ CDZ (−i)dz + α CDZ (−i)d∗+Nzα × CZ ˜E (−i)z˜e + β CZ ˜E (−i)z∗+N˜eβ (3) where the topic assignment to a topical word is determined by the probability that this topic appears in document d (the 1st term) and the probability that the selected item ˜e occurs in this topic (the 2nd term). 2. 
P(˜ei = ˜e|z,˜e−i,˜e′, D): target-side topical item assignment distribution of a source topical word given the current topic assignments z, the current item assignments of all other topical words ˜e−i, and the current item assignments of contextual words ˜e′. It is updated as follows: P(˜ei = ˜e|z,˜e−i,˜e′, D) ∝ CZ ˜E (−i)z˜e + β CZ ˜E (−i)z∗+ N˜eβ × C ˜EF (−i)˜ef + γ C ˜EF (−i)˜e∗+ Nfγ × ( CW ˜E (−i)w˜e + 1 CW ˜E (−i)w˜e )CW ˜ E′ w˜e (4) where the target-side topical item assignment to a topical word is determined by the probability that this item is from the topic z (the 1st term), the probability that this item is translated into the topical word f (the 2nd term) and the probability of contextual words within a ws word window centered at the topical word f, which influence the selection of the target-side topical item ˜e (the 3rd term). It is very important to note that we use a parallel corpus to train the model. Therefore we directly identify target-side topical items for source topical words via word alignments rather than sampling. 232 3. P(˜e′ i = ˜e|z,˜e,˜e′ −i, D): target-side topical item assignment distribution for a contextual word given the current topic assignments z, the current item assignments of topical words ˜e, and the current item assignments of all other contextual words ˜e′ −i. It is updated as follows: P(˜e′ i = ˜e|z,˜e,˜e′ −i, D) ∝ CW ˜E w˜e CW ˜E w∗ × C ˜EC (−i)˜ec + δ C ˜EC (−i)˜e∗+ Nc δ (5) where the target-side topical item assignment used to generate a contextual word is determined by the probability of this item being assigned to generate contextual words within a surface window of size ws (the 1st term) and the probability that contextual words occur in the context of this item (the 2nd term). In all above formulas, CDZ dz is the number of times that topic z has been assigned for all topical words in document d, CDZ d∗=P z CDZ dz is the topic number in document d, and CZ ˜E z˜e , C ˜EF ˜ef , CW ˜E w˜e , CW ˜E′ w˜e and C ˜EC ˜ec have similar explanations. Based on the above marginal distributions, we iteratively update all assignments of corpus D until the constructed Markov chain converges. Model parameters are estimated using these final assignments. 3.3 Inference on Unseen Documents For a new document, we first predict its topics and target-side topical items using the incremental Gibbs sampling algorithm described in (Kataria et al., 2011). In this algorithm, we iteratively update topic assignments and translation assignments of an unseen document following the same process described in Section 3.2, but with estimated model parameters. Once we obtain these assignments, we estimate lexical translation probabilities based on the sampled counts of target-side topical items. Formally, for the position i in the document corresponding to the content word f, we collect the sampled count that translation ˜e generates f, denoted by Csam(˜e, f). This count can be normalized to form a new translation probability in the following way: p(˜e|f) = Csam(˜e, f) + k Csam + k · N˜e,f (6) where Csam is the total number of samples during inference and N˜e,f is the number of candidate translations of f. Here we apply add-k smoothing to refine this translation probability, where k is a tunable global smoothing constant. Under the framework of log-linear model (Och and Ney, 2002), we use this translation probability as a new feature to improve lexical selection in SMT. 
4 Experiments In order to examine the effectiveness of our model, we carried out several groups of experiments on Chinese-to-English translation. 4.1 Setup Our bilingual training corpus is from the FBIS corpus and the Hansards part of LDC2004T07 corpus (1M parallel sentences, 54.6K documents, with 25.2M Chinese words and 29M English words). We first used ZPar toolkit2 and Stanford toolkit3 to preprocess (i.e., word segmenting, PoS tagging) the Chinese and English parts of training corpus, and then word-aligned them using GIZA++ (Och and Ney, 2003) with the option “grow-diag-final-and”. We chose the NIST evaluation set of MT05 as the development set, and the sets of MT06/MT08 as test sets. On average, these three sets contain 17.2, 13.9 and 14.1 content words per sentence, respectively. We trained a 5-gram language model on the Xinhua portion of Gigaword corpus using the SRILM Toolkit (Stolcke, 2002). Our baseline system is a state-of-the-art SMT system, which adapts bracketing transduction grammars (Wu, 1997) to phrasal translation and equips itself with a maximum entropy based reordering model (MEBTG) (Xiong et al., 2006). We used the toolkit4 developed by Zhang (2004) to train the reordering model with the following parameters: iteration number iter=200 and Gaussian prior g=1.0. During decoding, we set the ttable-limit as 20, the stack-size as 100. The translation quality is evaluated by case-insensitive BLEU-4 (Papineni et al., 2002) metric. Finally, we conducted paired bootstrap sampling (Koehn, 2004) to test the significance in BLEU score differences. 2http://people.sutd.edu.sg/∼yue zhang/doc/index.html 3http://nlp.stanford.edu/software 4http://homepages.inf.ed.ac.uk/lzhang10/maxenttoolkit.html 233 Model MT05 CATM (± 6w) 33.35 CATM (± 8w) 33.43 CATM (± 10w) 33.42 CATM (± 12w) 33.49 CATM (± 14w) 33.30 Table 2: Experiment results on the development set using different window sizes ws. To train CATM, we set the topic number Nz as 25.5 For hyperparameters α and β, we empirically set α=50/Nz and β=0.1, as implemented in (Griffiths and Steyvers, 2004). Following Han et al. (2012), we set γ and δ as 1.0/Nf and 2000/Nc, respectively. During the training process, we ran 400 iterations of the Gibbs sampling algorithm. For documents to be translated, we first ran 300 rounds in a burn-in step to let the probability distributions converge, and then ran 1500 rounds where we collected independent samples every 5 rounds. The longest training time of CATM is less than four days on our server using 4GB RAM and one core of 3.2GHz CPU. As for the smoothing constant k in Eq. (6), we set its values to 0.5 according to the performance on the development set in additional experiments. 4.2 Impact of Window Size ws Our first group of experiments were conducted on the development set to investigate the impact of the window size ws. We gradually varied window size from 6 to 14 with an increment of 2. Experiment results are shown in Table 2. We achieve the best performance when ws=12. This suggests that a ?12-word window context is sufficient for predicting target-side translations for ambiguous source-side topical words. We therefore set ws=12 for all experiments thereafter. 4.3 Overall Performance In the second group of experiments, in addition to the conventional MEBTG system, we also compared CATM with the following two models: Word Sense Disambiguation Model (WSDM) (Chan et al., 2007). This model improves lexical selection in SMT by exploiting local contexts. 
For 5We try different topic numbers from 25 to 100 with an increment of 25 each time. We find that Nz=25 produces a slightly better performance than other values on the development set. each content word, we construct a MaxEnt-based classifier incorporating local collocation and surrounding word features, which are also adopted by Chan et al. (2007). For each candidate translation ˜e of topical word f, we use WSDM to estimate the context-specific translation probability P(˜e|f), which is used as a new feature in SMT system. Topic-specific Lexicon Translation Model (TLTM) (Zhao and Xing, 2007). This model focuses on the utilization of document-level context. We adapted it to estimate a lexicon translation probability as follows: p(f|˜e, d) ∝p(˜e|f, d) · p(f|d) = P z p(˜e|f, z) · p(f|z) · p(z|d) (7) where p(˜e|f, z) is the lexical translation probability conditioned on topic z, which can be calculated according to the principle of maximal likelihood, p(f|z) is the generation probability of word f from topic z, and p(z|d) denotes the posterior topic distribution of document d. Note that our CATM is proposed for lexical selection on content words. To show the strong effectiveness of our model, we also compared it against the full-fledged variants of the above-mentioned two models that are built for all source words. We refer to them as WSDM (All) and TLTM (All), respectively. Table 3 displays BLEU scores of different lexical selection models. All models outperform the baseline. Although we only use CATM to predict translations for content words, CATM achieves an average BLEU score of 26.77 on the two test sets, which is higher than that of the baseline by 1.18 BLEU points. This improvement is statistically significant at p<0.01. Furthermore, we also find that our model performs better than WSDM and TLTM with significant improvements. Finally, even if WSDM (All) and TLTM (all) are built for all source words, they are still no better than than CATM that selects desirable translations for content words. These experiment results strongly demonstrate the advantage of CATM over previous lexical selection models. 5 Analysis In order to investigate why CATM is able to outperform previous models that explore only local contex234 Model Local Context Global Topic MT06 MT08 Avg Baseline × × 29.66↓↓ 21.52↓↓ 25.59 WSDM √ × 30.62↓ 22.05↓↓ 26.34 WSDM (All) √ × 30.92 22.27 26.60 TLTM × √ 30.27↓↓ 21.70↓↓ 25.99 TLTM (All) × √ 30.33↓↓ 21.58↓↓ 25.96 CATM √ √ 30.97 22.56 26.77 Table 3: Experiment results on the test sets. Avg = average BLEU scores. WSDM (All) and TLTM (All) are models built for all source words. ↓: significantly worse than CATM (p<0.05), ↓↓: significantly worse than CATM (p<0.01) . tual words or global topics, we take a deep look into topics, topical items and contextual words learned by CATM and empirically analyze the effect of modeling correlations between local contextual words and global topics on lexical selection. 5.1 Outputs of CATM We present some examples of topics learned by CATM in Table 4. We also list five target-side topical items with the highest probabilities for each topic, and the most probable five contextual words for each target-side topical item. These examples clearly show that target-side topical items tightly connect global topics and local contextual words by capturing their correlations. 5.2 Effect of Correlation Modeling Compared to previous lexical selection models, CATM jointly models both local contextual words and global topics. 
Such a joint modeling also enables CATM to capture their inner correlations at the model level. In order to examine the effect of correlation modeling on lexical selection, we compared CATM with its three variants:  CATM (Context) that only uses local context information. We determined target-side topical items for content words in this variant by setting the probability distribution that a topic generates a target-side topical item to be uniform;  CATM (Topic) that explores only global topic information. We identified target-side topical items for content words in the model by setting ws as 0, i.e., no local contextual words being used at all.  CATM (Log-linear) is the combination of the above-mentioned two variants ( and ) in a log-linear manner, which does not capture correlations between local contextual words and global topics at the model level. Model MT06 MT08 Avg CATM (Context) 30.46 ↓↓ 22.02 ↓↓ 26.24 CATM (Topic) 30.20 ↓↓ 21.90 ↓↓ 26.05 CATM (Log-linear) 30.59 ↓ 22.24 ↓ 26.42 CATM 30.97 22.56 26.77 Table 5: Experiment results on the test sets. CATM (Loglinear) is the combination of CATM (Context) and CATM (Topic) in a log-linear manner. Results in Table 5 show that CATM performs significantlly better than both CATM (Topic) and CATM (Context). Even compared with CATM (Loglinear), CATM still achieves a significant improvement of 0.35 BLEU points (p<0.05). This validates the effectiveness of capturing correlations for lexical selection at the model level. 6 Related Work Our work is partially inspired by (Han and Sun, 2012), where an entity-topic model is presented for entity linking. We successfully adapt this work to lexical selection in SMT. The related work mainly includes the following two strands. (1) Lexical Selection in SMT. In order to explore rich context information for lexical selection, some researchers propose trigger-based lexicon models to capture long-distance dependencies (Hasan et al., 2008; Mauser et al., 2009), and many more researchers build classifiers to select desirable translations during decoding (Chan et al., 2007; Carpuat and Wu, 2007; He et al., 2008; Liu et al., 2008). Along this line, Shen et al. (2009) introduce four new linguistic and contextual features for translation selection in SMT. Recently, we have witnessed an increasing efforts in exploiting document-level context information to improve lexical selection. Xiao et al. 
(2011) impose a hard constraint to guarantee 235 Topic Target-side Topical Items Source-side Contextual Words refugee UNHCR J¬(refugee) •¯?(office) ; (commissioner) ¯Ö(affair) p?(high-level) republic é†(union) ¬Ì(democracy) ?(government) žd=(Islam) ¥š(Central Africa) refugee J¬(refugee) ˆ£(return) 6l”¤(displaced) eˆ(repatriate) o(protect) Kosovo r÷Fæ(Metohija) ¸S(territory) ˆÅ(crisis) Û³(situation) l‘æ(Serbia) federal ÚI(republic) Hd.Å(Yugoslavia) ‰¢»(Kosovo) ?(government) Û(authority) military military * (observer) 1Ä(action) {I(USA) < (personnel) Üè(army) missile “”(defense) XÚ(system) {I(USA) u(launch) q(*) United States ¥I(China) F(Japan)  (Taiwan) ¯(military) NMD(National Missile Defense) system éÜI(United Nations) ïá(build) I(country) I[(country) &E(information) war Ô(war) |(∗) -.(world) uÄ(wage) ° (gulf) economy country uÐ¥(developing) uˆ(developed) š³(Africa) uÐ(development) ¥(China) development Œ±Y(sustainable) ²L(economy) r?(promote) ¬(society) ¯(situation) international ¬(society) |„(organization) ÜŠ(coorporation) I[(country) éÜI(United Nations) economic ¬(society) uÐ(development) O•(growth) I[(country) ¥z(globalization) trade uÐ(development) IS(international) -.(world) Ý](investment) :(point) cross-strait relation Taiwan ¥I(China) Œº(mainland) Û(authority) {I(USA) Óœ(compatriot) China `(say) {I(USA)  (Taiwan) K(principle) ü(*) relation uÐ(development) W(*) ¥(China) ü(*) I(country) cross-strait ü(*) 'X(relation)  (Taiwan) W(*) 6(exchange) issue )û(settlement) ?Ø(discuss) ¯K(issue) -‡(important)  (Taiwan) Table 4: Examples of topics, topical items and contextual words learned by CATM with Nz=25 and Ws=12. Chinese words that do not have direct English translations are denoted with ”*”. Here “q” and “|” are Chinese quantifiers for missile and war, respectively; “ü” and “W” together means cross-starit. the document-level translation consistency. Ture et al. (2012) soften this consistency constraint by integrating three counting features into decoder. Also relevant is the work of Xiong et al.(2013), who use three different models to capture lexical cohesion for document-level SMT. (2) SMT with Topic Models. In this strand, Zhao and Xing (2006, 2007) first present a bilingual topical admixture formalism for word alignment in SMT. Tam et al. (2007) and Ruiz et al. (2012) apply topic model into language model adaptation. Su et al. (2012) conduct translation model adaptation with monolingual topic information. Gong et al. (2010) and Xiao et al. (2012) introduce topic-based similarity models to improve SMT system. Axelrod et al. (2012) build topic-specific translation models from the TED corpus and select topic-relevant data from the UN corpus to improve coverage. Eidelman et al. (2012) incorporate topic-specific lexical weights into translation model. Hewavitharana et al. (2013) propose an incremental topic based translation model adaptation approach that satisfies the causality constraint imposed by spoken conversations. Hasler et al. (2014) present a new bilingual variant of LDA to compute topic-adapted, probabilistic phrase translation features. They also use a topic model to learn latent distributional representations of different context levels of a phrase pair (Hasler et al., 2014b). In the studies mentioned above, those by Zhao and Xing (2006), Zhao and Xing (2007), Hasler et al. (2014a), and Hasler et al. (2014b) are most related to our work. However, they all perform dynamic translation model adaptation with topic models. 
Significantly different from them, we propose a new topic model that exploits both local contextual words and global topics for lexical selection. To the best of our knowledge, this is first attempt to capture correlations between local words and global topics for better lexical selection at the model level. 7 Conclusion and Future Work This paper has presented a novel context-aware topic model for lexical selection in SMT. Jointly modeling local contexts, global topics and their correlations in a unified framework, our model provides an effective way to capture context information at different levels for better lexical selection in SMT. Experiment results not only demonstrate the effectiveness of the proposed topic model, but also show that lexical selection benefits from correlation modeling. In the future, we want to extend our model from the word level to the phrase level. We also plan to 236 improve our model with monolingual corpora. Acknowledgments The authors were supported by National Natural Science Foundation of China (Grant Nos 61303082 and 61403269), Natural Science Foundation of Jiangsu Province (Grant No. BK20140355), National 863 program of China (No. 2015AA011808), Research Fund for the Doctoral Program of Higher Education of China (No. 20120121120046), the Special and Major Subject Project of the Industrial Science and Technology in Fujian Province 2013 (Grant No. 2013HZ0004-1), and 2014 Key Project of Anhui Science and Technology Bureau (Grant No. 1301021018). We also thank the anonymous reviewers for their insightful comments. References Amittai Axelrod, Xiaodong He, Li Deng, Alex Acero, and Mei-Yuh Hwang. 2012. New methods and evaluation experiments on translating TED talks in the IWSLT benchmark. In Proc. of ICASSP 2012, pages 4945-4648. Rafael E. Banchs and Marta R. Costa-juss`a. 2011. A Semantic Feature for Statistical Machine Translation. In Proc. of SSSST-5 2011, pages 126-134. David M. Blei. 2003. Latent Dirichlet Allocation. Journal of Machine Learning, pages 993-1022. Marine Carpuat and Dekai Wu. 2007. Improving Statistical Machine Translation Using Word Sense Disambiguation. In Proc. of EMNLP 2007, pages 61-72. Yee Seng Chan, Hwee Tou Ng, and David Chiang. 2007. Word Sense Disambiguation Improves Statistical Machine Translation. In Proc. of ACL 2007, pages 33-40. David Chiang. 2007. Hierarchical Phrase-Based Translation. Computational Linguistics, pages 201-228. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability. In Proc. of ACL 2011, short papers, pages 176-181. George Doddington. 2002. Translation Quality Using Ngram Cooccurrence Statistics. In Proc. of HLT 2002, 138-145. Vladimir Eidelman, Jordan Boyd-Graber, and Philip Resnik. 2012. Topic Models for Dynamic Translation Model Adaptation. In Proc. of ACL 2012, Short Papers, pages 115-119. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable Inference and Training of ContextRich Syntactic Translation Models. In Proc. of ACL 2006, pages 961-968. Zhengxian Gong and Guodong Zhou. 2010. Improve SMT with Source-side Topic-Document Distributions. In Proc. of SUMMIT 2010. Thomas L. Griffiths and Mark Steyvers. 2004. Finding Scientific Topics. In Proc. of the National Academy of Sciences 2004. Xianpei Han and Le Sun. 2012. An Entity-Topic Model for Entity Linking. In Proc. of EMNLP 2012, pages 105-115. 
Saˇsa Hasan, Juri Ganitkevitch, Hermann Ney, and Jes´us Andr´es-Ferrer 2008. Triplet Lexicon Models for Statistical Machine Translation. In Proc. of EMNLP 2008, pages 372-381. Eva Hasler, Phil Blunsom, Philipp Koehn, and Barry Haddow. 2014. Dynamic Topic Adaptation for Phrase-based MT. In Proc. of EACL 2014, pages 328337. Eva Hasler, Phil Blunsom, Philipp Koehn, and Barry Haddow. 2014. Dynamic Topic Adaptation for SMT using Distributional Profiles. In Proc. of WMT 2014, pages 445-456. Zhongjun He, Qun Liu, and Shouxun Lin. 2008. Improving Statistical Machine Translation using Lexicalized Rule Selection. In Proc. of COLING 2008, pages 321328. Sanjika Hewavitharana, Dennis Mehay, Sankaranarayanan Ananthakrishnan, and Prem Natarajan. 2013. Incremental Topic-based TM Adaptation for Conversational SLT. In Proc. of ACL 2013, Short Papers, pages 697-701. Saurabh S. Kataria, Krishnan S. Kumar, and Rajeev Rastogi. 2011. Entity Disambiguation with Hierarchical Topic Models. In Proc. of KDD 2011, pages 10371045. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-based Translation. In Proc. of NAACL-HLT 2003, pages 127-133. Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proc. of EMNLP 2004, pages 388-395. Qun Liu, Zhongjun He, Yang Liu, and Shouxun Lin. 2008. Maximum Entropy based Rule Selection Model for Syntax-based Statistical Machine Translation. In Proc. of EMNLP 2008, pages 89-97. Arne Mauser, Saˇsa Hasan, and Hermann Ney. 2009. Extending Statistical Machine Translation with Discriminative and Trigger-based Lexicon Models. In Proc. of EMNLP 2009, pages 210-218. 237 Franz Joseph Och and Hermann Ney. 2002. Discriminative Training and Maximum Entropy Models for Statistical Machine Translation. In Proc. of ACL 2002, pages 295-302. Franz Joseph Och and Hermann Ney. 2003. A Systematic Comparison of Various Statistical Alignment Models. Computational Linguistics, 2003(29), pages 1951. Franz Josef Och. 2003. Minimum Error Rate Training in Statistical Machine Translation. In Proc. of ACL 2003, pages 160-167. Franz Joseph Och and Hermann Ney. 2004. The Alignment Template Approach to Statistical Machine Translation. Computational Linguistics, 2004(30), pages 417-449. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2007. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proc. of ACL 2002, pages 311-318. Nick Ruiz and Marcello Federico. 2012. Topic Adaptation for Lecture Translation through Bilingual Latent Semantic Models. In Proc. of the Sixth Workshop on Statistical Machine Translation, pages 294-302. Libin Shen, Jinxi Xu, Bing Zhang, Spyros Matsoukas, and Ralph Weischedel. 2009. Effective Use of Linguistic and Contextual Information for Statistical Machine Translation. In Proc. of EMNLP 2009, pages 72-80. Andreas Stolcke. 2002. Srilm - An Extensible Language Modeling Toolkit. In Proc. of ICSLP 2002, pages 901904. Jinsong Su, Hua Wu, Haifeng Wang, Yidong Chen, Xiaodong Shi, Huailin Dong, and Qun Liu. 2012. Translation Model Adaptation for Statistical Machine Translation with Monolingual Topic Information. In Proc. of ACL 2012, pages 459-468. Yik-Cheung Tam, Ian R. Lane, and Tanja Schultz. 2007. Bilingual LSA-based adaptation for statistical machine translation. Machine Translation, 21(4), pages 187207. Ferhan Ture, DouglasW. Oard, and Philip Resnik. 2012. Encouraging Consistent Translation Choices. In Proc. of NAACL-HLT 2012, pages 417-426. Dekai Wu. 1997. 
Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403. Tong Xiao, Jingbo Zhu, Shujie Yao, and Hao Zhang. 2011. Document-level Consistency Verification in Machine Translation. In Proc. of MT SUMMIT 2011, pages 131-138. Xinyan Xiao, Deyi Xiong, Min Zhang, Qun Liu, and Shouxun Lin. 2012. A Topic Similarity Model for Hierarchical Phrase-based Translation. In Proc. of ACL 2012, pages 750-758. Deyi Xiong, Qun Liu, and Shouxun Lin. 2006. Maximum Entropy Based Phrase Reordering Model for Statistical Machine Translation. In Proc. of ACL 2006, pages 521-528. Deyi Xiong, Guosheng Ben, Min Zhang, Yajuan L¨u, and Qun Liu. 2013. Modeling Lexical Cohesion for Document-Level Machine Translation. In Proc. of IJCAI 2013, pages 2183-2189. Deyi Xiong and Min Zhang. 2014. A Sense-Based Translation Model for Statistical Machine Translation. In Proc. of ACL 2014, pages 1459-1469. Bing Zhao and Eric P.Xing. 2006. BiTAM: Bilingual Topic AdMixture Models for Word Alignment. In Proc. of ACL/COLING 2006, pages 969-976. Bing Zhao and Eric P.Xing. 2007. HM-BiTAM: Bilingual Topic Exploration, Word Alignment, and Translation. In Proc. of NIPS 2007, pages 1-8. 238
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 239–249, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Learning Answer-Entailing Structures for Machine Comprehension Mrinmaya Sachan1∗ Avinava Dubey1∗ Eric P. Xing1 Matthew Richardson2 1Carnegie Mellon University 2Microsoft Research 1{mrinmays, akdubey, epxing}@cs.cmu.edu [email protected] Abstract Understanding open-domain text is one of the primary challenges in NLP. Machine comprehension evaluates the system’s ability to understand text through a series of question-answering tasks on short pieces of text such that the correct answer can be found only in the given text. For this task, we posit that there is a hidden (latent) structure that explains the relation between the question, correct answer, and text. We call this the answer-entailing structure; given the structure, the correctness of the answer is evident. Since the structure is latent, it must be inferred. We present a unified max-margin framework that learns to find these hidden structures (given a corpus of question-answer pairs), and uses what it learns to answer machine comprehension questions on novel texts. We extend this framework to incorporate multi-task learning on the different subtasks that are required to perform machine comprehension. Evaluation on a publicly available dataset shows that our framework outperforms various IR and neuralnetwork baselines, achieving an overall accuracy of 67.8% (vs. 59.9%, the best previously-published result.) 1 Introduction Developing an ability to understand natural language is a long-standing goal in NLP and holds the promise of revolutionizing the way in which people interact with machines and retrieve information (e.g., for scientific endeavor). To evaluate this ability, Richardson et al. (2013) proposed the task of machine comprehension (MCTest), along with ∗*Work started while the first two authors were interns at Microsoft Research, Redmond. a dataset for evaluation. Machine comprehension evaluates a machine’s understanding by posing a series of reading comprehension questions and associated texts, where the answer to each question can be found only in its associated text. Solutions typically focus on some semantic interpretation of the text, possibly with some form of probabilistic or logical inference, in order to answer the questions. Despite significant recent interest (Burges, 2013; Weston et al., 2014; Weston et al., 2015), the problem remains unsolved. In this paper, we propose an approach for machine comprehension. Our approach learns latent answer-entailing structures that can help us answer questions about a text. The answer-entailing structures in our model are closely related to the inference procedure often used in various models for MT (Blunsom and Cohn, 2006), RTE (MacCartney et al., 2008), paraphrase (Yao et al., 2013b), QA (Yih et al., 2013), etc. and correspond to the best (latent) alignment between a hypothesis (formed from the question and a candidate answer) with appropriate snippets in the text that are required to answer the question. An example of such an answer-entailing structure is given in Figure 1. The key difference between the answerentailing structures considered here and the alignment structures considered in previous works is that we can align multiple sentences in the text to the hypothesis. 
The sentences in the text considered for alignment are not restricted to occur contiguously in the text. To allow such a discontiguous alignment, we make use of the document structure; in particular, we take help from rhetorical structure theory (Mann and Thompson, 1988) and event and entity coreference links across sentences. Modelling the inference procedure via answer-entailing structures is a crude yet effective and computationally inexpensive proxy to model the semantics needed for the problem. Learning these latent structures can also be bene239 Figure 1: The answer-entailing structure for an example from MCTest500 dataset. The question and answer candidate are combined to generate a hypothesis sentence. Then latent alignments are found between the hypothesis and the appropriate snippets in the text. The solid red lines show the word alignments from the hypothesis words to the passage words, the dashed black lines show auxiliary co-reference links in the text and the labelled dotted black arrows show the RST relation (elaboration) between the two sentences. Note that the two sentences do not have to be contiguous sentences in the text. We provide some more examples of answer-entailing structures in the supplementary. ficial as they can assist a human in verifying the correctness of the answer, eliminating the need to read a lengthy document. The overall model is trained in a max-margin fashion using a latent structural SVM (LSSVM) where the answer-entailing structures are latent. We also extend our LSSVM to multi-task settings using a top-level question-type classification. Many QA systems include a question classification component (Li and Roth, 2002; Zhang and Lee, 2003), which typically divides the questions into semantic categories based on the type of the question or answers expected. This helps the system impose some constraints on the plausible answers. Machine comprehension can benefit from such a pre-classification step, not only to constrain plausible answers, but also to allow the system to use different processing strategies for each category. Recently, Weston et al. (2015) defined a set of 20 sub-tasks in the machine comprehension setting, each referring to a specific aspect of language understanding and reasoning required to build a machine comprehension system. They include fact chaining, negation, temporal and spatial reasoning, simple induction, deduction and many more. We use this set to learn to classify questions into the various machine comprehension subtasks, and show that this task classification further improves our performance on MCTest. By using the multi-task setting, our learner is able to exploit the commonality among tasks where possible, while having the flexibility to learn taskspecific parameters where needed. To the best of our knowledge, this is the first use of multi-task learning in a structured prediction model for QA. We provide experimental validation for our model on a real-world dataset (Richardson et al., 2013) and achieve superior performance vs. a number of IR and neural network baselines. 2 The Problem Machine comprehension requires us to answer questions based on unstructured text. We treat this as selecting the best answer from a set of candidate answers. The candidate answers may be pre-defined, as is the case in multiple-choice question answering, or may be undefined but restricted (e.g., to yes, no, or any noun phrase in the text). 
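Viewed as a system, this setup amounts to a simple prediction rule: build one hypothesis per candidate answer, score each hypothesis against the passage, and return the candidate whose hypothesis scores highest. The sketch below is ours; make_hypothesis and score_hypothesis are placeholders standing in for the question rewriting and the latent-structure scoring model described in the following sections.

def answer_question(passage, question, candidates, make_hypothesis, score_hypothesis):
    # One hypothesis per candidate answer; the model ranks them and the top one wins.
    scored = [(score_hypothesis(passage, make_hypothesis(question, answer)), answer)
              for answer in candidates]
    return max(scored, key=lambda pair: pair[0])[1]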
Machine Comprehension as Textual Entailment: Let for each question qi ∈Q, ti be the unstructured text and Ai = {ai1, . . . , aim} be the set of candidate answers to the question. We cast the machine comprehension task as a textual entailment task by converting each questionanswer candidate pair (qi, ai,j) into a hypothesis statement hij. For example, the question “What did Alyssa eat at the restaurant?” and answer candidate “Catfish” in Figure 1 can be combined to achieve a hypothesis “Alyssa ate Catfish at the restaurant”. We use the question matching/rewriting rules described in Cucerzan and Agichtein (2005) to achieve this transformation. For each question qi, the machine comprehension task reduces to picking the hypothesis ˆhi that has the highest likelihood of being entailed by the text among the set of hypotheses hi = {hi1, . . . , him} generated for that question. Let h∗ i ∈hi be the correct hypothesis. Now let us define the latent answer-entailing structures. 3 Latent Answer-Entailing Structures The latent answer-entailing structures help the model in providing evidence for the correct hy240 pothesis. We consider the quality of a one-toone word alignment from a hypothesis to snippets in the text as a proxy for the evidence. Hypothesis words are aligned to a unique text word in the text or an empty word. For example, in Figure 1, all words but “at” are aligned to a word in the text. The word “at” can be assumed to be aligned to an empty word and it has no effect on the model. Learning these alignment edges typically helps a model decompose the input and output structures into semantic constituents and determine which constituents should be compared to each other. These alignments can then be used to generate more effective features. The alignment depends on two things: (a) snippets in the text to be aligned to the hypothesis and (b) word alignment from the hypothesis to the snippets. We explore three variants of the snippets in the text to be aligned to the hypothesis. The choice of these snippets composed with the word alignment is the resulting hidden structure called an answer-entailing structure. 1. Sentence Alignment: The simplest variant is to find a single sentence in the text that best aligns to the hypothesis. This is the structure considered in a majority of previous works in RTE (MacCartney et al., 2008) and QA (Yih et al., 2013) as they only reason on single sentence length texts. 2. Subset Alignment: Here we find a subset of sentences from the text (instead of just one sentence) that best aligns with the hypothesis. 3. Subset+ Alignment: This is the same as above except that the best subset is an ordered set. 4 Method A natural solution is to treat MCTest as a structured prediction problem of ranking the hypotheses hi such that the correct hypothesis is at the top of this ranking. This induces a constraint on the ranking structure that the correct hypothesis is ranked above the other competing hypotheses. For each text ti and hypotheses set hi, let Yi be the set of possible orderings of the hypotheses. Let y∗ i ∈Yi be a correct ranking (such that the correct hypothesis is at the top of this ranking). Let the set of possible answer-entailing structures for each text hypothesis pair (ti, hi) be denoted by Zi. For each text ti, with hypotheses set hi, an ordering of the hypotheses y ∈Yi, and hidden structure z ∈Zi. 
we define a scoring function Scorew(ti, hi, z, y) parameterized by a weight vector w such that we have the prediction rule: ( byi, bzi) = arg maxy∈Yi,z∈Zi Scorew(ti, hi, z, y). The learning task is to find w such that the predicted ordering byi is close to the optimal ordering y∗ i . Mathematically this can be written as minw 1 2∥w∥2 + C P i ∆(y∗ i , z∗ i , byi, bzi) where z∗ i = arg maxz∈Zi Scorew(ti, hi, z, y∗ i ) and ∆is the loss function between the predicted and the actual ranking and latent structure. We simplify the loss function and assume it to be independent of the hidden structure (∆(y∗ i , z∗ i , byi, bzi) = ∆(y∗ i , byi)) and use a linear scoring function: Scorew(ti, hi, z, y) = wT φ(ti, hi, z, y) where φ is a feature map dependent on the text ti, the hypothesis set hi, an ordering of answers y and a hidden structure z. We use a convex upper bound of the loss function (Yu and Joachims, 2009) to rewrite the objective: min w 1 2∥w∥2 −C X i wT φ(ti, hi, z∗ i , y∗ i ) (1) +C n X i=1 max y∈Yi,z∈Zi{wT φ(ti, hi, z, y) + ∆(y∗ i , y)} This problem can be solved using ConcaveConvex Programming (Yuille and Rangarajan, 2003) with the cutting plane algorithm for structural SVM (Finley and Joachims, 2008). We use phi partial order (Joachims, 2006; Dubey et al., 2009) which has been used in previous structural ranking literature to incorporate ranking structure in the feature vector φ: φ(ti, hi, z, y) = X j:hij̸=h∗ i cj(y)(ψ(ti, h∗ i , z∗ i ) −ψ(ti, hij, zj)) (2) where, cj(y) = 1 if h∗ i is above hij in the ranking y else −1. We use pair preference (Chakrabarti et al., 2008) as the ranking loss ∆(y∗ i , y). Here, ψ is the feature vector defined for a text, hypothesis and answer-entailing structure. Solution: We substitute the feature map definition (2) into Equation 1, leading to our LSSVM formulation. We consider the optimization as an alternating minimization problem where we alternate between getting the best zij and ψ for each texthypothesis pair given w (inference) and then solving for the weights w given ψ to obtain an optimal ordering of the hypothesis (learning). The step for solving for the weights is similar to rankSVM 241 (Joachims, 2002). Algorithm 1 describes our overall procedure Here, we use beam search for inferAlgorithm 1 Alternate Minimization for LSSVM 1: Initialize w 2: repeat 3: zij = arg maxz wT ψ(ti, hij, z) ∀i, j 4: Compute ψ for each i, j 5: Ci = ∅∀i 6: repeat 7: for i = 1, . . . , n do 8: r(y) = wT φ(ti, hi, z, y) + ∆(y∗ i , y) −wT φ(ti, hi, z∗ i , y∗ i ) 9: byi = arg maxy∈Yi r(y) 10: ξi = max{0, maxy∈Ui r(y)} 11: if r( byi) > ξi + ϵ then 12: Ci = Ci ∪byi Solve : min w,ξ 1 2∥w∥2 + C X i ξi ∀i, ∀y ∈Ci : wT φ(ti, hi, z∗ i , y∗ i ) ≥wT φ(ti, hi, z, y) + ∆(y∗ i , y) −ξi 13: until no change in any Ci 14: until Convergence ring the latent structure zij in step 3. Also, note that in step 3, when the answer-entailing structures are “Subset” or “Subset+”, we can always get a higher score by considering a larger subset of sentences. To discourage this, we add a penalty on the score proportional to the size of the subset. Multi-task Latent Structured Learning: Machine comprehension is a complex task which often requires us to interpret questions, the kind of answers they seek as well as the kinds of inference required to solve them. Many approaches in QA (Moldovan et al., 2003; Ferrucci, 2012) solve this by having a top-level classifier that categorizes the complex task into a variety of sub-tasks. 
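Before describing the multi-task extension in detail, the single-task procedure of Algorithm 1 can be summarized by the following simplified skeleton (ours, not the exact cutting-plane implementation; infer_best_structure and solve_ranking_svm are placeholders for the beam-search inference with the subset-size penalty and for the rankSVM-style weight update under the pairwise constraints, respectively).

import numpy as np

def train_lssvm(examples, feature_dim, feature_fn,
                infer_best_structure, solve_ranking_svm, n_rounds=10):
    # examples: list of (text, hypotheses, gold_index) triples.
    w = np.zeros(feature_dim)
    for _ in range(n_rounds):
        # Inference step: fix w; pick the best latent answer-entailing structure for
        # every text-hypothesis pair and cache its feature vector psi(t, h, z).
        psis = [[feature_fn(text, h, infer_best_structure(w, text, h)) for h in hypotheses]
                for text, hypotheses, _ in examples]
        # Learning step: fix the structures; update w so that the correct hypothesis
        # outscores each competitor by a margin (the pairwise ranking constraints).
        w = solve_ranking_svm(psis, [gold for _, _, gold in examples])
    return w

The multi-task variant described next reuses exactly this loop once the feature map φ is replaced by the stacked map Φs.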
The subtasks can correspond to various categories of questions that can be asked or various facets of text understanding that are required to do well at machine comprehension in its entirety.It is well known that learning a sub-task together with other related subtasks leads to a better solution for each sub-task. Hence, we consider learning classifications of the sub-tasks and then using multi-task learning. We extend our LSSVM to multi-task settings. Let S be the number of sub-tasks. We assume that the predictor w for each subtask s is partitioned into two parts: a parameter w0 that is globally shared across each subtasks and a parameter vs that is locally used to provide for the variations within the particular subtask: w = w0 + vs. Mathematically we define the scoring function for text ti, hypothesis set hi of the subtask s to be Scorew0,v,s(ti, hi, z, y) = (w0 + vs)T φ(ti, hi, z, y). The objective in this case is min w0,v λ2∥w0∥2 + λ1 S S X s=1 ∥vs∥2 (3) S X s=1 n X i=1 max y∈Yi,z∈Zi{(w0 + vs)T φ(ti, hi, z, y) + ∆(y∗ i , y)} −C X i (w0 + vs)T φ(t, hi, z∗ i , y∗ i ) Now, we extend a trick that Evgeniou and Pontil (2004) used on linear SVM to reformulate this problem into an objective that looks like (1). Such reformulation will help in using algorithm 1 to solve the multi-task problem as well. Lets define a new feature map Φs, one for each sub-task s using the old feature map φ as: Φs(ti, hi, z, y) = (φ(ti, hi, z, y) µ , 0, . . . , 0 | {z } s−1 , φ(ti, hi, z, y), 0, . . . , 0 | {z } S−s ) where µ = Sλ2 λ1 and the 0 denotes the zero vector of the same size as φ. Also define our new predictor as w = (√µw0, v1, . . . , vS). Using this formulation we can show that wT Φs(ti, hi, z, y) = (w0 + vs)T φ(ti, hi, z, y) and ∥w∥2 = P s ∥vs∥2 + µ∥w0∥2. Hence, if we now define the objective (1) but use the new feature map and w then we will get back our multitask objective (3). Thus we can use the same setup as before for multi-task learning after appropriately changing the feature map. We will explore a few definitions of sub-tasks in our experiments. Features: Recall that our features had the form ψ(t, h, z) where the hypothesis h was itself formed from a question q and answer candidate a. Given an answer-entailing structure z, we induce the following features based on word level similarity of aligned words: (a) Limited word-level surface-form matching and (b) Semantic word form matching: Word similarity for synonymy using SENNA word vectors (Collobert et al., 2011), 242 “Antonymy” ‘Class-Inclusion’ or ‘Is-A’ relations using Wordnet (Fellbaum, 1998). We compute additional features of the aforementioned kinds to match named entities and events. We also add features for matching local neighborhood in the aligned structure: features for matching bigrams, trigrams, dependencies, semantic roles, predicateargument structure as well as features for matching global structure: a tree kernel for matching syntactic representations of entire sentences using Srivastava and Hovy (2013). The local and global features can use the RST and coreference links enabling inference across sentences. For instance, in the example shown in figure 1, the coreference link connecting the two “restaurant” words brings the snippets “Alyssa enjoyed the” and “had a special on catfish” closer making these features more effective. The answer-entailing structures should be intuitively similar to the question but also the answer. Hence, we add features that are the product of features for the text-question match and text-answer match. 
String edit Features: In addition to looking for features on exact word/phrase match, we also add features using two paraphrase databases ParaPara (Chan et al., 2011) and DIRT (Lin and Pantel, 2001). The ParaPara database contains strings of the form string1 →string2 like “total lack of” → “lack of”, “is one of” →“among”, etc. Similarly, the DIRT database contains paraphrases of the form “If X decreases Y then X reduces Y”, “If X causes Y then X affects Y”, etc. Whenever we have a substring in the text can be transformed into another using these two databases, we keep match features for the substring with a higher score (according to w) and ignore the other substring. The sentences with discourse relations are related to each other by means of substitution, ellipsis, conjunction and lexical cohesion, etc (Mann and Thompson, 1988) and can help us answer certain kinds of questions (Jansen et al., 2014). As an example, the “cause” relation between sentences in the text can often give cues that can help us answer “why” or “how” questions. Hence, we add additional features - conjunction of the RST label and the question word - to our feature vector. Similarly, the entity and event co-reference relations can allows the system to reason about repeating entities or events through all the sentences they get mentioned in. Thus, we add additional features of the aforementioned types by replacing entity mentions with their first mentions. Subset+ Features: We add an additional set of features which match the first sentence in the ordered set to the question and the last sentence in the ordered set to the answer. This helps in the case when a certain portion of the text is targeted by the question but then it must be used in combination with another sentence to answer the question. For instance, in Figure 1, sentence 2 mentions the target of the question but the answer can only be given when in combination with sentence 1. Negation We empirically found that one key limitation in our formulation is its inability to handle negation (both in questions and text). Negation is especially hurtful to our model as it not only results in poor performance on questions that require us to reason with negated facts, it provides our model with a wrong signal (facts usually align well with their negated versions). We use a simple heuristic to overcome the negation problem. We detect negation (either in the hypothesis or a sentence in the text snippet aligned to it) using a small set of manually defined rules that test for presence of words such as “not”, “n’t”, etc. Then, we flip the partial order - i.e. the correct hypothesis is now ranked below the other competing hypotheses. For inference at test time, we also invert the prediction rule i.e. we predict the hypothesis (answer) that has the least score under the model. 5 Experiments Datasets: We use two datasets for our evaluation. (1) First is the MCTest-500 dataset 1, a freely available set of 500 stories (split into 300 train, 50 dev and 150 test) and associated questions (Richardson et al., 2013). The stories are fictional so the answers can be found only in the story itself. The stories and questions are carefully limited, thereby minimizing the world knowledge required for this task. Yet, the task is challenging for most modern NLP systems. Each story in MCTest has four multiple choice questions, each with four answer choices. Each question has only one correct answer. Furthermore, questions are also annotated with ‘single’ and ‘multiple’ labels. 
The questions annotated ‘single’ only require one sentence in the story to answer them. For ‘multiple’ questions it should not be possible to find the answer to the question in any individual sentence of the passage. In a sense, the ‘multiple’ questions are 1http://research.microsoft.com/mct 243 harder than the ‘single’ questions as they typically require complex lexical analysis, some inference and some form of limited reasoning. Cucerzanconverted questions can also be downloaded from the MCTest website. (2) The second dataset is a synthetic dataset released under the bAbI project2 (Weston et al., 2015). The dataset presents a set of 20 ‘tasks’, each testing a different aspect of text understanding and reasoning in the QA setting, and hence can be used to test and compare capabilities of learning models in a fine-grained manner. For each ‘task’, 1000 questions are used for training and 1000 for testing. The ‘tasks’ refer to question categories such as questions requiring reasoning over single/two/three supporting facts or two/three arg. relations, yes/no questions, counting questions, etc. Candidate answers are not provided but the answers are typically constrained to a small set: either yes or no or entities already appearing in the text, etc. We write simple rules to convert the question and answer candidate pairs to hypotheses. 3 Baselines: We have five baselines. (1) The first three baselines are inspired from Richardson et al. (2013). The first baseline (called SW) uses a sliding window and matches a bag of words constructed from the question and hypothesized answer to the text. (2) Since this ignores long range dependencies, the second baseline (called SW+D) accounts for intra-word distances as well. As far as we know, SW+D is the best previously published result on this task.4 (3) The third baseline (called RTE) uses textual entailment to answer MCTest questions. For this baseline, MCTest is again re-casted as an RTE task by converting each question-answer pair into a statement (using Cucerzan and Agichtein (2005)) and then selecting the answer whose statement has the highest likelihood of being entailed by the 2https://research.facebook.com/researchers/1543934539189348 3Note that the bAbI dataset is artificial and not meant for open-domain machine comprehension. It is a toy dataset generated from a simulated world. Due to its restrictive nature, we do not use it directly in evaluating our method vs. other open-domain machine comprehension methods. However, it provides benefit in identifying interesting subtasks of machine comprehension. As will be seen, we are able to leverage the dataset both to improve our multi-task learning algorithm, as well as to analyze the strengths and weaknesses of our model. 4We also construct two additional baselines (LSTM and QUANTA) for comparison in this paper both of which achieve superior performance to SW+D. story. 5 (4) The fourth baseline (called LSTM) is taken from Weston et al. (2015). The baseline uses LSTMs (Hochreiter and Schmidhuber, 1997) to accomplish the task. LSTMs have recently achieved state-of-the-art results in a variety of tasks due to their ability to model longterm context information as opposed to other neural networks based techniques. (5) The fifth baseline (called QANTA)6 is taken from Iyyer et al. (2014). QANTA too uses a recursive neural network for question answering. Task Classification for MultiTask Learning: We consider three alternative task classifications for our experiments. 
First, we look at question classification. We use a simple question classification based on the question word (what, why, what, etc.). We call this QClassification. Next, we also use a question/answer classification7 from Li and Roth (2002). This classifies questions into different semantic classes based on the possible semantic types of the answers sought. We call this QAClassification. Finally, we also learn a classifier for the 20 tasks in the Machine Comprehension gamut described in Weston et al. (2015). The classification algorithm (called TaskClassification) was built on the bAbI training set. It is essentially a Naive-Bayes classifier and uses only simple unigram and bigram features for the question and answer. The tasks typically correspond to different strategies when looking for an answer in the machine comprehension setting. In our experiments we will see that learning these strategies is better than learning the question answer classification which is in turn better than learning the question classification. Results: We compare multiple variants of our LSSVM8 where we consider a variety of answerentailing structures and our modification for negation and multi-task LSSVM, where we consider three kinds of task classification strategies against the baselines on the MCTest dataset. We consider two evaluation metrics: accuracy (proportion of questions correctly answered) and NDCG4 5The BIUTEE system (Stern and Dagan, 2012) available under the Excitement Open Platform http://hltfbk.github.io/Excitement-Open-Platform/ was used for recognizing textual entailment. 6http://cs.umd.edu/ miyyer/qblearn/ 7http://cogcomp.cs.illinois.edu/Data/QA/QC/ 8We tune the SVM regularization parameter C and the penalty factor on the subset size on the development set. We use a beam of size 5 in our experiments. We use Stanford CoreNLP and the HILDA parser (Feng and Hirst, 2014) for linguistic preprocessing. 244 69.85 59.45 61 63.24 66.15 64.83 67.65 67.99 67.83 40 50 60 70 80 Single Multiple All Percentage Accuracy 0.869 0.82 0.83 0.857 0.863 0.861 0.867 0.869 0.868 0.65 0.7 0.75 0.8 0.85 0.9 Single Multiple All Subset+/Negation Task Classification Subset+/Negation QAClassification Subset+/Negation QClassification Subset+/Negation Subset+ Subset Sentence QANTA LSTM RTE SW+D SW NDCG Figure 2: Comparison of variations of our method against several baselines on the MCTest-500 dataset. The figure shows two statistics, accuracy (on the left) and NDCG4 (on the right) on the test set of MCTest-500. All differences between the baselines and LSSVMs, the improvement due to negation and the improvements due to multi-task learning are significant (p < 0.01) using the two-tailed paired T-test. The exact numbers are available in the supplementary. (J¨arvelin and Kek¨al¨ainen, 2002). Unlike classification accuracy which evaluates if the prediction is correct or not, NDCG4, being a measure of ranking quality, evaluates the position of the correct answer in our predicted ranking. Figure 2 describes the comparison on MCTest. We can observe that all the LSSVM models have a better performance than all the five baselines (including LSTMs and RNNs which are state-ofthe-art for many other NLP tasks) on both metrics. Very interestingly, LSSVMs have a considerable improvement over the baselines for “multiple” questions. We posit that this is because of our answer-entailing structure alignment strategy which is a weak proxy to the deep semantic inference procedure required for machine comprehension. 
The RTE baseline achieves the best performance on the “single” questions. This is perhaps because the RTE community has almost entirely focused on single sentence text hypothesis pairs for a long time. However, RTE fares pretty poorly on the “multiple” questions indicating that of-the-shelf RTE systems cannot perform inference across large texts. Figure 2 also compares the performance of LSSVM variants when various answer-entailing structures are considered. Here we observe a clear benefit of using the alignment to the best subset structure over alignment to best sentence structure. We furthermore see improvements when the best subset alignment structure is augmented with the subset+ features. We can observe that the negation heuristic also helps, especially for “single” questions (majority of negation cases in the MCTest dataset are for the “single” questions). It is also interesting to see that the multi-task learners show a substantial boost over the single task SSVM. Also, it can be observed that the multi-task learner greatly benefits if we can learn a separation between the various strategies needed to learn an overarching list of subtasks required to solve the machine comprehension task. 9 The multi-task method (TaskClassification) which uses the Weston style categorization does better 9Note that this is despite the fact that the classifier in not learned on the MCTest dataset but the bAbI detaset! This hints at the fact that the task classification proposed in Weston et al. (2015) is more general and broadly also makes sense for other machine comprehension settings such as MCTest. 245 than the multi-task method (QAClassification) that learns the question answer classification. QAClassification in turn performs better than multi-task method (QClassification) that learns the question classification only. 6 Strengths and Weaknesses A good question to be asked is how good is structure alignment as a proxy to the semantics of the problem? In this section, we attempt to tease out the strengths and limitations of such a structure alignment approach for machine comprehension. To do so, we evaluate our methods on various tasks in the bAbl dataset.For the bAbI dataset, we add additional features inspired from the “task” distinction to handle specific “tasks”. In our experiments, we observed a similar general pattern of improvement of LSSVM over the baselines as well as the improvement due to multitask learning. Again task classification helped the multi-task learner the most and the QA classification helped more than the QClassification. It is interesting here to look at the performance within the sub-tasks. Negation improved the performance for three sub-tasks, namely, the tasks of modelling “yes/no questions”, “simple negations” and “indefinite knowledge” (the “Indefinite Knowledge” sub-task tests the ability to model statements that describe possibilities rather than certainties). Each of these sub-tasks contain a significant number of negation cases. Our models do especially well on questions requiring reasoning over one and two supporting facts, two arg. relations, indefinite knowledge, basic and compound coreference and conjunction. Our models achieve lower accuracy better than the baselines on two sub-tasks, namely “path finding” and “agent motivations”. Our model along with the baselines do not do too well on the “counting” sub-task, although we get slightly better scores. 
The “counting” sub-task (which asks about the number of objects with a certain property) requires the inference to have an ability to perform simple counting operations. The “path finding” sub-task requires the inference to reason about the spatial path between locations (e.g. Pittsburgh is located on the west of New York). The “agents motivations” sub-task asks questions such as ‘why an agent performs a certain action’. As inference is cheaply modelled via alignment structure, we lack the ability to deeply reason about facts or numbers. This is an important challenge for future work. 7 Related Work The field of QA is quite rich. Most QA evaluations such as TREC have typically focused on short factoid questions. The solutions proposed have ranged from various IR based approaches (Mittal and Mittal, 2011) that treat this as a problem of retrieval from existing knowledge bases and perform some shallow inference to NLP approaches that learn a similarity between the question and a set of candidate answers (Yih et al., 2013). A majority of these approaches do not focus on doing any deeper inference. However, the task of machine comprehension requires an ability to perform inference over paragraph length texts to seek the answer. This is challenging for most IR and NLP techniques. In this paper, we presented a strategy for learning answer-entailing structures that helped us perform inference over much longer texts by treating this as a structured input-output problem. The approach of treating a problem as one of mapping structured inputs to structured outputs is common across many NLP applications. Examples include word or phrase alignment for bitexts in MT (Blunsom and Cohn, 2006), text-hypothesis alignment in RTE (Sammons et al., 2009; MacCartney et al., 2008; Yao et al., 2013a; Sultan et al., 2014), question-answer alignment in QA (Berant et al., 2013; Yih et al., 2013; Yao and Van Durme, 2014), etc. Again all of these approaches align local parts of the input to local parts of the output. In this work, we extended the word alignment formalism to align multiple sentences in the text to the hypothesis. We also incorporated the document structure (rhetorical structures (Mann and Thompson, 1988)) and co-reference to help us perform inference over longer documents. QA has had a long history of using pipeline models that extract a limited number of high-level features from induced representations of questionanswer pairs, and then built a classifier using some labelled corpora. On the other hand we learnt these structures and performed machine comprehension jointly through a unified max-margin framework. We note that there exist some recent models such as Yih et al. (2013) that do model QA by automatically defining some kind of alignment between the question and answer snippets and use a similar structured input-output model. However, they are limited to single sentence answers. 246 Another advantage of our approach is its simple and elegant extension to multi-task settings. There has been a rich vein of work in multi-task learning for SVMs in the ML community. Evgeniou and Pontil (2004) proposed a multi-task SVM formulation assuming that the multi-task predictor w factorizes as the sum of a shared and a taskspecific component. We used the same idea to propose a multi-task variant of Latent Structured SVMs. This allows us to use the single task SVM in the multi-task setting with a different feature mapping. This is much simpler than other competing approaches such as Zhu et al. 
(2011) proposed in the literature for multi-task LSSVM. 8 Conclusion In this paper, we addressed the problem of machine comprehension which tests language understanding through multiple choice question answering tasks. We posed the task as an extension to RTE. Then, we proposed a solution by learning latent alignment structures between texts and the hypotheses in the equivalent RTE setting. The task requires solving a variety of sub-tasks so we extended our technique to a multi-task setting. Our technique showed empirical improvements over various IR and neural network baselines. The latent structures while effective are cheap proxies to the reasoning and language understanding required for this task and have their own limitations. We also discuss strengths and limitations of our model in a more fine-grained analysis. In the future, we plan to use logic-like semantic representations of texts, questions and answers and explore approaches to perform structured inference over richer semantic representations. Acknowledgments The authors would like to thank the anonymous reviewers, along with Sujay Jauhar and Snigdha Chaturvedi for their valuable comments and suggestions to improve the quality of the paper. References [Berant et al.2013] Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In EMNLP, pages 1533–1544. ACL. [Blunsom and Cohn2006] Phil Blunsom and Trevor Cohn. 2006. Discriminative word alignment with conditional random fields. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 65–72. Association for Computational Linguistics. [Burges2013] Christopher JC Burges. 2013. Towards the machine comprehension of text: An essay. Technical report, Microsoft Research Technical Report MSR-TR-2013-125, 2013, pdf. [Chakrabarti et al.2008] Soumen Chakrabarti, Rajiv Khanna, Uma Sawant, and Chiru Bhattacharyya. 2008. Structured learning for non-smooth ranking losses. In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 88–96. [Chan et al.2011] Tsz Ping Chan, Chris CallisonBurch, and Benjamin Van Durme. 2011. Reranking bilingually extracted paraphrases using monolingual distributional similarity. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 33–42. [Collobert et al.2011] Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. [Cucerzan and Agichtein2005] S. Cucerzan and E. Agichtein. 2005. Factoid question answering over unstructured and structured content on the web. In Proceedings of TREC 2005. [Dubey et al.2009] Avinava Dubey, Jinesh Machchhar, Chiranjib Bhattacharyya, and Soumen Chakrabarti. 2009. Conditional models for non-smooth ranking loss functions. In ICDM, pages 129–138. [Evgeniou and Pontil2004] Theodoros Evgeniou and Massimiliano Pontil. 2004. Regularized multi–task learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 109–117. [Fellbaum1998] Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. Bradford Books. [Feng and Hirst2014] Vanessa Wei Feng and Graeme Hirst. 2014. A linear-time bottom-up discourse parser with constraints and post-editing. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 511–521. [Ferrucci2012] David A Ferrucci. 2012. Introduction to this is watson. IBM Journal of Research and Development, 56(3.4):1–1. [Finley and Joachims2008] T. Finley and T. Joachims. 2008. Training structural SVMs when exact inference is intractable. In International Conference on Machine Learning (ICML), pages 304–311. 247 [Hochreiter and Schmidhuber1997] Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735–1780. [Iyyer et al.2014] Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum´e III. 2014. A neural network for factoid question answering over paragraphs. In Empirical Methods in Natural Language Processing. [Jansen et al.2014] Peter Jansen, Mihai Surdeanu, and Peter Clark. 2014. Discourse complements lexical semantics for non-factoid answer reranking. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 977–986. [J¨arvelin and Kek¨al¨ainen2002] Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2002. Cumulated gain-based evaluation of ir techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446. [Joachims2002] Thorsten Joachims. 2002. Optimizing search engines using clickthrough data. In Proceedings of the eighth ACM SIGKDD international conference on Knowledge discovery and data mining, pages 133–142. ACM. [Joachims2006] T. Joachims. 2006. Training linear SVMs in linear time. In ACM SIGKDD International Conference On Knowledge Discovery and Data Mining (KDD), pages 217–226. [Li and Roth2002] Xin Li and Dan Roth. 2002. Learning question classifiers. In Proceedings of the 19th international conference on Computational linguistics-Volume 1, pages 1–7. [Lin and Pantel2001] Dekang Lin and Patrick Pantel. 2001. Dirt@ sbt@ discovery of inference rules from text. In Proceedings of the seventh ACM SIGKDD international conference on Knowledge discovery and data mining, pages 323–328. [MacCartney et al.2008] Bill MacCartney, Michel Galley, and Christopher D Manning. 2008. A phrasebased alignment model for natural language inference. In Proceedings of the conference on empirical methods in natural language processing, pages 802– 811. [Mann and Thompson1988] William C Mann and Sandra A Thompson. 1988. {Rhetorical Structure Theory: Toward a functional theory of text organisation}. Text, 3(8):234–281. [Mittal and Mittal2011] Sparsh Mittal and Ankush Mittal. 2011. Versatile question answering systems: seeing in synthesis. International Journal of Intelligent Information and Database Systems, 5(2):119– 142. [Moldovan et al.2003] Dan Moldovan, Marius Pas¸ca, Sanda Harabagiu, and Mihai Surdeanu. 2003. Performance issues and error analysis in an opendomain question answering system. ACM Transactions on Information Systems (TOIS), 21(2):133– 154. [Richardson et al.2013] Matthew Richardson, J.C. Christopher Burges, and Erin Renshaw. 2013. Mctest: A challenge dataset for the open-domain machine comprehension of text. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 193–203. [Sammons et al.2009] M. Sammons, V. Vydiswaran, T. Vieira, N. Johri, M. Chang, D. Goldwasser, V. Srikumar, G. Kundu, Y. Tu, K. Small, J. Rule, Q. Do, and D. Roth. 2009. Relation alignment for textual entailment recognition. In TAC. [Srivastava and Hovy2013] Shashank Srivastava and Dirk Hovy. 2013. 
A walk-based semantically enriched tree kernel over distributed word representations. In Empirical Methods in Natural Language Processing, pages 1411–1416. [Stern and Dagan2012] Asher Stern and Ido Dagan. 2012. Biutee: A modular open-source system for recognizing textual entailment. In Proceedings of the ACL 2012 System Demonstrations, pages 73–78. [Sultan et al.2014] Arafat Md Sultan, Steven Bethard, and Tamara Sumner. 2014. Back to basics for monolingual alignment: Exploiting word similarity and contextual evidence. Transactions of the Association of Computational Linguistics – Volume 2, Issue 1, pages 219–230. [Weston et al.2014] Jason Weston, Sumit Chopra, and Antoine Bordes. 2014. Memory networks. arXiv preprint arXiv:1410.3916. [Weston et al.2015] Jason Weston, Antoine Bordes, Sumit Chopra, and Tomas Mikolov. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks. [Yao and Van Durme2014] Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 956–966. Association for Computational Linguistics. [Yao et al.2013a] Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013a. A lightweight and high performance monolingual word aligner. In ACL (2), pages 702–707. [Yao et al.2013b] Xuchen Yao, Benjamin Van Durme, Chris Callison-Burch, and Peter Clark. 2013b. Semi-markov phrase-based monolingual alignment. In Proceedings of EMNLP. [Yih et al.2013] Wentau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. 248 [Yu and Joachims2009] Chun-Nam Yu and T. Joachims. 2009. Learning structural svms with latent variables. In International Conference on Machine Learning (ICML). [Yuille and Rangarajan2003] A. L. Yuille and Anand Rangarajan. 2003. The concave-convex procedure. Neural Comput. [Zhang and Lee2003] Dell Zhang and Wee Sun Lee. 2003. Question classification using support vector machines. In Proceedings of the 26th annual international ACM SIGIR conference on Research and development in informaion retrieval, pages 26–32. ACM. [Zhu et al.2011] Jun Zhu, Ning Chen, and Eric P Xing. 2011. Infinite latent svm for classification and multi-task learning. In Advances in neural information processing systems, pages 1620–1628. 249
2015
24
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 250–259, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Learning Continuous Word Embedding with Metadata for Question Retrieval in Community Question Answering Guangyou Zhou1, Tingting He1, Jun Zhao2, and Po Hu1 1 School of Computer, Central China Normal University, Wuhan 430079, China 2 National Laboratory of Pattern Recognition, CASIA, Beijing 100190, China {gyzhou,tthe,phu}@mail.ccnu.edu.cn [email protected] Abstract Community question answering (cQA) has become an important issue due to the popularity of cQA archives on the web. This paper is concerned with the problem of question retrieval. Question retrieval in cQA archives aims to find the existing questions that are semantically equivalent or relevant to the queried questions. However, the lexical gap problem brings about new challenge for question retrieval in cQA. In this paper, we propose to learn continuous word embeddings with metadata of category information within cQA pages for question retrieval. To deal with the variable size of word embedding vectors, we employ the framework of fisher kernel to aggregated them into the fixedlength vectors. Experimental results on large-scale real world cQA data set show that our approach can significantly outperform state-of-the-art translation models and topic-based models for question retrieval in cQA. 1 Introduction Over the past few years, a large amount of usergenerated content have become an important information resource on the web. These include the traditional Frequently Asked Questions (FAQ) archives and the emerging community question answering (cQA) services, such as Yahoo! Answers1, Live QnA2, and Baidu Zhidao3. The content in these web sites is usually organized as questions and lists of answers associated with metadata like user chosen categories to questions and askers’ awards to the best answers. This data made 1http://answers.yahoo.com/ 2http://qna.live.com/ 3http://zhidao.baidu.com/ cQA archives valuable resources for various tasks like question-answering (Jeon et al., 2005; Xue et al., 2008) and knowledge mining (Adamic et al., 2008), etc. One fundamental task for reusing content in cQA is finding similar questions for queried questions, as questions are the keys to accessing the knowledge in cQA. Then the best answers of these similar questions will be used to answer the queried questions. Many studies have been done along this line (Jeon et al., 2005; Xue et al., 2008; Duan et al., 2008; Lee et al., 2008; Bernhard and Gurevych, 2009; Cao et al., 2010; Zhou et al., 2011; Singh, 2012; Zhang et al., 2014a). One big challenge for question retrieval in cQA is the lexical gap between the queried questions and the existing questions in the archives. Lexical gap means that the queried questions may contain words that are different from, but related to, the words in the existing questions. For example shown in (Zhang et al., 2014a), we find that for a queried question “how do I get knots out of my cats fur?”, there are good answers under an existing question “how can I remove a tangle in my cat’s fur?” in Yahoo! Answers. Although the two questions share few words in common, they have very similar meanings, it is hard for traditional retrieval models (e.g., BM25 (Robertson et al., 1994)) to determine their similarity. 
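A quick, purely illustrative check of surface overlap makes this concrete. The snippet below is not part of any of the cited systems, and the tiny stopword list is an assumption; it simply shows that the two example questions share almost no content words, so a term-matching scorer such as BM25 has very little lexical evidence to rank them as similar.

```python
# Illustrative only: surface word overlap between the two example questions.
# The stopword list is a small hypothetical placeholder.
STOPWORDS = {"how", "do", "i", "can", "a", "in", "my", "of", "out", "get"}

def content_words(question):
    tokens = question.lower().replace("?", "").replace("'s", " ").split()
    return {t for t in tokens if t not in STOPWORDS}

q1 = content_words("how do I get knots out of my cats fur?")
q2 = content_words("how can I remove a tangle in my cat's fur?")

shared = q1 & q2                      # {'fur'} -- 'cats' vs. 'cat' do not even match
jaccard = len(shared) / len(q1 | q2)  # roughly 0.17 despite near-identical meaning
print(shared, jaccard)
```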
This lexical gap has become a major barricade preventing traditional IR models (e.g., BM25) from retrieving similar questions in cQA. To address the lexical gap problem in cQA, previous work in the literature can be divided into two groups. The first group is the translation models, which leverage the question-answer pairs to learn the semantically related words to improve traditional IR models (Jeon et al., 2005; Xue et al., 2008; Zhou et al., 2011). The basic assumption is that question-answer pairs are “parallel texts” and relationship of words (or phrases) can be established through word-to-word (or phrase-to-phrase) 250 translation probabilities (Jeon et al., 2005; Xue et al., 2008; Zhou et al., 2011). Experimental results show that translation models obtain stateof-the-art performance for question retrieval in cQA. However, questions and answers are far from “parallel” in practice, questions and answers are highly asymmetric on the information they contain (Zhang et al., 2014a). The second group is the topic-based models (Cai et al., 2011; Ji et al., 2012), which learn the latent topics aligned across the question-answer pairs to alleviate the lexical gap problem, with the assumption that a question and its paired answers share the same topic distribution. However, questions and answers are heterogeneous in many aspects, they do not share the same topic distribution in practice. Inspired by the recent success of continuous space word representations in capturing the semantic similarities in various natural language processing tasks, we propose to incorporate an embedding of words in a continuous space for question representations. Due to the ability of word embeddings, we firstly transform words in a question into continuous vector representations by looking up tables. These word embeddings are learned in advance using a continuous skip-gram model (Mikolov et al., 2013), or other continuous word representation learning methods. Once the words are embedded in a continuous space, one can view a question as a Bag-of-Embedded-Words (BoEW). Then, the variable-cardinality BoEW will be aggregated into a fixed-length vector by using the Fisher kernel (FK) framework of (Clinchant and Perronnin, 2013; Sanchez et al., 2013). Through the two steps, the proposed approach can map a question into a length invariable compact vector, which can be efficiently and effectively for large-scale question retrieval task in cQA. We test the proposed approach on large-scale Yahoo! Answers data and Baidu Zhidao data. Yahoo! Answers and Baidu Zhidao represent the largest and most popular cQA archives in English and Chinese, respectively. We conduct both quantitative and qualitative evaluations. Experimental results show that our approach can significantly outperform state-of-the-art translation models and topic-based models for question retrieval in cQA. Our contribution in this paper are three-fold: (1) we represent a question as a bag-of-embeddedwords (BoEW) in a continuous space; (2) we introduce a novel method to aggregate the variablecardinality BoEW into a fixed-length vector by using the FK. The FK is just one possible way to subsequently transform this bag representation into a fixed-length vector which is more amenable to large-scale processing; (3) an empirical verification of the efficacy of the proposed framework on large-scale English and Chinese cQA data. The rest of this paper is organized as follows. Section 2 summarizes the related work. 
Section 3 describes our proposed framework for question retrieval. Section 4 reports the experimental results. Finally, we conclude the paper in Section 5. 2 Related Work 2.1 Question Retrieval in cQA Significant research efforts have been conducted over the years in attempt to improve question retrieval in cQA (Jeon et al., 2005; Xue et al., 2008; Lee et al., 2008; Duan et al., 2008; Bernhard and Gurevych, 2009; Cao et al., 2010; Zhou et al., 2011; Singh, 2012; Zhang et al., 2014a). Most of these works focus on finding similar questions for the user queried questions. The major challenge for question retrieval in cQA is the lexical gap problem. Jeon et al. (2005) proposed a wordbased translation model for automatically fixing the lexical gap problem. Xue et al. (2008) proposed a word-based translation language model for question retrieval. Lee et al. (2008) tried to further improve the translation probabilities based on question-answer pairs by selecting the most important terms to build compact translation models. Bernhard and Gurevych (2009) proposed to use as a parallel training data set the definitions and glosses provided for the same term by different lexical semantic resources. In order to improve the word-based translation model with some contextual information, Riezler et al. (2007) and Zhou et al. (2011) proposed a phrase-based translation model for question and answer retrieval. The phrase-based translation model can capture some contextual information in modeling the translation of phrases as a whole, thus the more accurate translations can better improve the retrieval performance. Singh (2012) addressed the lexical gap issues by extending the lexical word-based translation model to incorporate semantic information (entities). In contrast to the works described above that assume question-answer pairs are “parallel text”, our paper deals with the lexical gap by learning con251 tinuous word embeddings in capturing the similarities without any assumptions, which is much more reasonable in practice. Besides, some other studies model the semantic relationship between questions and answers with deep linguistic analysis (Duan et al., 2008; Wang et al., 2009; Wang et al., 2010; Ji et al., 2012; Zhang et al., 2014a) or a learning to rank strategy (Surdeanu et al., 2008; Carmel et al., 2014). Recently, Cao et al. (2010) and Zhou et al. (2013) exploited the category metadata within cQA pages to further improve the performance. On the contrary, we focus on the representation learning for questions, with a different solution with those previous works. 2.2 Word Embedding Learning Representation of words as continuous vectors has attracted increasing attention in the area of natural language processing (NLP). Recently, a series of works applied deep learning techniques to learn high-quality word representations. Bengio et al. (2003) proposed a probabilistic neural network language model (NNLM) for word representations. Furthermore, Mikolov et al. (2013) proposed efficient neural network models for learning word representations, including the continuous skip-gram model and the continuous bag-ofword model (CBOW), both of which are unsupervised models learned from large-scale text corpora. Besides, there are also a large number of works addressing the task of learning word representations (Huang et al., 2012; Maas et al., 2011; Turian et al., 2010). 
Nevertheless, since most the existing works learned word representations mainly based on the word co-occurrence information, the obtained word embeddings cannot capture the relationship between two syntactically or semantically similar words if either of them yields very little context information. On the other hand, even though amount of context could be noisy or biased such that they cannot reflect the inherent relationship between words and further mislead the training process. Most recently, Yu et al. (2014) used semantic prior knowledge to improve word representations. Xu et al. (2014) used the knowledge graph to advance the learning of word embeddings. In contrast to all the aforementioned works, in this paper, we present a general method to leverage the metadata of category information within cQA pages to further improve the word embedding representations. To our knowledge, it is the first work to learn word embeddings with metadata on cQA data set. 3 Our Approach In this Section, we describe the proposed approach: learning continuous word embedding with metadata for question retrieval in cQA. The proposed framework consists of two steps: (1) word embedding learning step: given a cQA data collection, questions are treated as the basic units. For each word in a question, we firstly transform it to a continuous word vector through the looking up tables. Once the word embeddings are learned, each question is represented by a variable-cardinality word embedding vector (also called BoEW); (2) fisher vector generation step: which uses a generative model in the FK framework to generate fisher vectors (FVs) by aggregating the BoEWs for all the questions. Question retrieval can be performed through calculating the similarity between the FVs of a queried question and an existing question in the archive. From the framework, we can see that although the word embedding learning computations and generative model estimation are time consuming, they can run only once in advance. Meanwhile, the computational requirements of FV generation and similarity calculation are limited. Hence, the proposed framework can efficiently achieve the largescale question retrieval task. 3.1 Word Embedding Learning In this paper, we consider a context-aware predicting model, more specifically, the Skip-gram model (Mikolov et al., 2013) for learning word embeddings, since it is much more efficient as well as memory-saving than other approaches.4 Skipgram is recently proposed for learning word representations using a neural network model, whose underlying idea is that similar words should have similar contexts. In the Skip-gram model (see Figure 1), a sliding window is employed on the input text stream to generate the training data, and l indicates the context window size to be 2l + 1. In each slide window, the model aims to use the central word wk as input to predict the context words. Let Md×N denote the learned embedding matrix, 4Note that although we use the skip-gram model as an example to illustrate our approach, the similar framework can be developed on the basis of any other word embedding models. 252 … … … … word embedding of Figure 1: The continuous skip-gram model. where N is the vocabulary size and d is the dimension of word embeddings. Each column of M represents the embedding of a word. Let wk is first mapped to its embedding ewk by selecting the corresponding column vector of M. 
The probability of its context word $w_{k+j}$ is then computed using a log-linear softmax function:

$$p(w_{k+j} \mid w_k; \theta) = \frac{\exp\left(e_{w_{k+j}}^{\top} e_{w_k}\right)}{\sum_{w=1}^{N} \exp\left(e_{w}^{\top} e_{w_k}\right)} \quad (1)$$

where $\theta$ denotes the parameters to be learned, $k = 1 \cdots d$, and $j \in [-l, l]$. The log-likelihood over the entire training data is then:

$$J(\theta) = \sum_{(w_k,\, w_{k+j})} \log p(w_{k+j} \mid w_k; \theta) \quad (2)$$

To calculate the prediction errors for back-propagation, we need the derivative of $p(w_{k+j} \mid w_k; \theta)$, whose computation cost is proportional to the vocabulary size $N$. As $N$ is often very large, it is difficult to compute this derivative directly. To deal with this problem, Mikolov et al. (2013) proposed a simple negative sampling method, which generates $r$ noise samples for each input word to estimate the target word, where $r$ is very small compared with $N$. The training time therefore scales linearly with the number of noise samples and becomes independent of the vocabulary size. If the frequency of word $w$ is $u(w)$, the probability of sampling $w$ is usually set to $p(w) \propto u(w)^{3/4}$ (Mikolov et al., 2013).

3.2 Metadata Powered Model

Having outlined the skip-gram model, we now describe how we equip it with metadata information. cQA sites provide several kinds of metadata, such as "category" and "voting"; in this paper we only consider category information for word embedding learning. All questions in cQA are usually organized into a hierarchy of categories. When users ask a question, they are typically required to choose a category label for it from a predefined hierarchy of categories (Cao et al., 2010; Zhou et al., 2013). Previous work has demonstrated the effectiveness of category information for question retrieval (Cao et al., 2010; Zhou et al., 2013). In contrast, we argue that category information benefits word embedding learning. The basic idea is that category information encodes attributes or properties of words, from which we can group similar words according to their categories. Here, a word's category is assigned based on the questions it appears in. For example, the question "What are the security issues with java?" falls under the category "Computers & Internet → Security", so we simply assign the word java the category "Computers & Internet → Security". We may then require the representations of words that belong to the same category to be close to each other.

Let $s(w_k, w_i)$ be the similarity score between $w_k$ and $w_i$. Under the above assumption, we use the following heuristic to constrain the similarity scores:

$$s(w_k, w_i) = \begin{cases} 1 & \text{if } c(w_k) = c(w_i) \\ 0 & \text{otherwise} \end{cases} \quad (3)$$

where $c(w_k)$ denotes the category of $w_k$. If the central word $w_k$ shares the same category as the word $w_i$, their similarity score is 1; otherwise it is 0. We then encode the category information using a regularization function $E_c$:

$$E_c = \sum_{k=1}^{N} \sum_{i=1}^{N} s(w_k, w_i)\, d(w_k, w_i) \quad (4)$$

where $d(w_k, w_i)$ is the distance between the words in the embedding space and $s(w_k, w_i)$ serves as a weighting function. Again, for simplicity, we define $d(w_k, w_i)$ as the Euclidean distance between $w_k$ and $w_i$. Combining the skip-gram objective function with the regularization function derived from the category metadata, we obtain the combined objective $J_c$, which incorporates category information into the word representation learning process:

$$J_c = J(\theta) + \beta E_c \quad (5)$$

where $\beta$ is the combination coefficient.
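As an illustration of how $E_c$ and Equation (5) can be computed, the sketch below evaluates the category regularizer over an embedding matrix and combines it with a given skip-gram log-likelihood. The function names, the NumPy implementation, and the dense double loop are illustrative assumptions rather than the authors' code; the default $\beta = 0.001$ follows the value the paper later reports setting empirically.

```python
# Illustrative sketch of Eq. (3)-(5); names and the dense double loop are
# assumptions for clarity, not the authors' implementation.
import numpy as np

def category_regularizer(E, categories):
    """E: (N, d) word embedding matrix; categories[k] is the category of word k.
    Returns E_c = sum_{k,i} s(w_k, w_i) * d(w_k, w_i) with Euclidean d (Eqs. 3-4)."""
    N = E.shape[0]
    ec = 0.0
    for k in range(N):
        for i in range(N):
            if i != k and categories[k] == categories[i]:  # s(w_k, w_i) = 1
                ec += np.linalg.norm(E[k] - E[i])          # Euclidean distance
    return ec

def combined_objective(J_theta, E, categories, beta=0.001):
    """Eq. (5): J_c = J(theta) + beta * E_c."""
    return J_theta + beta * category_regularizer(E, categories)
```

The double sum is quadratic in the vocabulary size, so in practice the regularizer would only be evaluated for the central word of the current sliding window and its same-category neighbours, which matches the training procedure described next.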
Our goal is to maximize the combined objective Jc, which 253 … … … … … Figure 2: The continuous skip-gram model with metadata of category information, called M-NET. can be optimized using back propagation neural networks. We call this model as metadata powered model (see Figure 2), and denote it by M-NET for easy of reference. In the implementation, we optimize the regularization function derived from the metadata of category information along with the training process of the skip-gram model. During the procedure of learning word representations from the context words in the sliding window, if the central word wk hits the category information, the corresponding optimization process of the metadata powered regularization function will be activated. Therefore, we maximize the weighted Euclidean distance between the representation of the central word and that of its similar words according to the objective function in Equation (5). 3.3 Fisher Vector Generation Once the word embeddings are learned, questions can be represented by variable length sets of word embedding vectors, which can be viewed as BoEWs. Semantic level similarities between queried questions and the existing questions represented by BoEWs can be captured more accurately than previous bag-of-words (BoW) methods. However, since BoEWs are variable-size sets of word embeddings and most of the index methods in information retrieval field are not suitable for this kinds of issues, BoEWs cannot be directly used for large-scale question retrieval task. Given a cQA data collection Q = {qi, 1 ≤i ≤ |Q|}, where qi is the ith question and |Q| is the number of questions in the data collection. The ith question qi is composed by a sequence of words wi = {wij, 1 ≤j ≤Ni}, where Ni denotes the length of qi. Through looking up table (word embedding matrix) of M, the ith question qi can be represented by Ewi = {ewij, 1 ≤j ≤Ni}, where ewij is the word embedding of wij. According to the framework of FK (Clinchant and Perronnin, 2013; Sanchez et al., 2013; Zhang et al., 2014b), questions are modeled by a probability density function. In this work, we use Gaussian mixture model (GMM) to do it. We assume that the continuous word embedding Ewi for question qi have been generated by a “universal” (e.g., questionindependent) probability density function (pdf). As is a common practice, we choose this pdf to be a GMM since any continuous distribution can be approximated with arbitrary precision by a mixture of Gaussian. In what follows, the pdf is denoted uλ where λ = {θi, µi, Σi, i = 1 · · · K} is the set of parameters of the GMM. θi, µi and Σi denote respectively the mixture weight, mean vector and covariance matrix of Gaussian i. For computational reasons, we assume that the covariance matrices are diagonal and denote σ2 i the variance vector of Gaussian i, e.g., σ2 i = diag(P i). In real applications, the GMM is estimated offline with a set of continuous word embeddings extracted from a representative set of questions. The parameters λ are estimated through the optimization of a Maximum Likelihood (ML) criterion using the Expectation-Maximization (EM) algorithm. In the following, we follow the notations used in (Sanchez et al., 2013). Given uλ, one can characterize the question qi using the following score function: Gqi λ = ▽Ni λ loguλ(qi) (6) where Gqi λ is a vector whose size depends only on the number of parameters in λ. 
Assuming that the word embedding ewij is iid (a simplifying assumption), we get: Gqi λ = Ni X j=1 ▽λloguλ(ewij) (7) Following the literature (Sanchez et al., 2013), we propose to measure the similarity between two questions qi and qj using the FK: K(qi, qj) = GqT i λ F −1 λ Gqj λ (8) where Fλ is the Fisher Information Matrix (FIM) of uλ: Fλ = Eqi∼uλ  Gqi λ GqT i λ  (9) Since Fλ is symmetric and positive definite, F −1 λ can be transformed to LT λ Lλ based on the Cholesky decomposition. Hence, KFK(qi, qj) can rewritten as follows: KFK(qi, qj) = GqT i λ Gqj λ (10) 254 where Gqi λ = LλGqi λ = Lλ ▽λ loguλ(qi) (11) In (Sanchez et al., 2013), Gqi λ refers to as the Fisher Vector (FV) of qi. The dot product between FVs can be used to calculate the semantic similarities. Based on the specific probability density function, GMM, FV of qi is respect to the mean µ and standard deviation σ of all the mixed Gaussian distributions. Let γj(k) be the soft assignment of the jth word embedding ewij in qi to Guassian k (uk): γj(k) = p(k|ewij) θiuk(ewij) PK j=1 θkuk(ewij) (12) Mathematical derivations lead to: Gqi µ,k = 1 Ni √θi Ni X j=1 γj(k) hewij −µk σk i (13) Gqi σ,k = 1 Ni √2θi Ni X j=1 γj(k) h(ewij −µk)2 σ2 k −1 i The division by the vector σk should be understood as a term-by-term operation. The final gradient vector Gqi λ is the concatenation of the Gqi µ,k and Gqi σ,k vectors for k = 1 · · · K. Let d denote the dimensionality of the continuous word embeddings and K be the number of Gaussians. The final fisher vector Gqi λ is therefore 2Kd-dimensional. 4 Experiments In this section, we present the experiments to evaluate the performance of the proposed method for question retrieval. 4.1 Data Set and Evaluation Metrics We collect the data sets from Yahoo! Answers and Baidu Zhidao. Yahoo! Answers and Baidu Zhidao represent the largest and the most popular cQA archives in English and Chinese, respectively. More specifically, we utilized the resolved questions at Yahoo! Answers and Baidu Zhidao. The questions include 10 million items from Yahoo! Answers and 8 million items from Baidu Zhidao (also called retrieval data). Each resolved question consists of three fields: “title”, “description” and “answers”, as well as some metadata, such as “category”. For question retrieval, we use only the “title” field and “category” metadata. It #queries #candidate #relevant Yahoo data 1,000 13,000 2,671 Baidu data 1,000 8,000 2,104 Table 1: Statistics on the manually labeled data. is assumed that the titles of questions already provide enough semantic information for understanding users’ information needs (Duan et al., 2008). We develop two test sets, one for “Yahoo data”, and the other for “Baidu data”. In order to create the test sets, we collect some extra questions that have been posted more recently than the retrieval data, and randomly sample 1, 000 questions for Yahoo! Answers and Baidu Zhidao, respectively. We take those questions as queries. All questions are lowercased and stemmed. Stopwords5 are also removed. We separately index all data from Yahoo! Answers and Baidu Zhidao using an open source Lucene with the BM25 scoring function6. For each query from Yahoo! Answers and Baidu Zhidao, we retrieve the several candidate questions from the corresponding indexed data by using the BM25 ranking algorithm in Lucene. On average, each query from Yahoo! Answers has 13 candidate questions and the average number of candidate questions for Baidu Zhidao is 8. 
We recruit students to label the relevance of the candidate questions regarding to the queries. Specifically, for each type of language, we let three native students. Given a candidate question, a student is asked to label it with “relevant” or “irrelevant”. If a candidate question is considered semantically similar to the query, the student will label it as “relevant”; otherwise, the student will label it as “irrelevant”. As a result, each candidate question gets three labels and the majority of the label is taken as the final decision for a querycandidate pair. We randomly split each of the two labeled data sets into a validation set and a test set with a ration 1 : 3. The validation set is used for tuning parameters of different models, while the test set is used for evaluating how well the models ranked relevant candidates in contrast to irrelevant candidates. Table 1 presents the manually labeled data. Please note that rather than evaluate both retrieval and ranking capability of different meth5http://truereader.com/manuals/onix/stopwords1.html 6We use the BM25 implementation provided by Apache Lucene (http://lucene.apache.org/), using the default parameter setting (k1 = 1.2, b = 0.75) 255 ods like the existing work (Cao et al., 2010), we compare them in a ranking task. This may lose recall for some methods, but it can enable largescale evaluation. In order to evaluate the performance of different models, we employ Mean Average Precision (MAP), Mean Reciprocal Rank (MRR), RPrecision (R-Prec), and Precision at K (P@5) as evaluation measures. These measures are widely used in the literature for question retrieval in cQA (Cao et al., 2010). 4.2 Parameter Setting In our experiments, we train the word embeddings on another large-scale data set from cQA sites. For English, we train the word embeddings on the Yahoo! Webscope dataset7. For Chinese, we train the word embeddings on a data set with 1 billion web pages from Baidu Zhidao. These two data sets do not intersect with the above mentioned retrieval data. Little pre-processing is conducted for the training of word embeddings. The resulting text is tokenized using the Stanford tokenizer,8, and every word is converted to lowercase. Since the proposed framework has no limits in using which of the word embedding learning methods, we only consider the following two representative methods: Skip-gram (baseline) and M-NET. To train the word embedding using these two methods, we apply the same setting for their common parameters. Specifically, the count of negative samples r is set to 3; the context window size l is set to 5; each model is trained through 1 epoch; the learning rate is initialized as 0.025 and is set to decrease linearly so that it approached zero at the end of training. Besides, the combination weight β used in MNET also plays an important role in producing high quality word embedding. Overemphasizing the weight of the original objective of Skip-gram may result in weakened influence of metadata, while putting too large weight on metadata powered objective may hurt the generality of learned word embedding. Based on our experience, it is a better way to decode the objective combination weight of the Skip-gram model and metadata information based on the scale of their respective derivatives during optimization. Finally, we set β = 0.001 empirically. Note that if the parameter 7The Yahoo! Webscope dataset Yahoo answers comprehensive questions and answers version 1.0.2, available at http://reseach.yahoo.com/Academic Relations. 
8http://nlp.stanford.edu/software/tokenizer.shtml is optimized on the validation set, the final performance can be further improved. For parameter K used in FV, we do an experiment on the validation data set to determine the best value among 1, 2, 4, · · · , 64 in terms of MAP. As a result, we set K = 16 in the experiments empirically as this setting yields the best performance. 4.3 Main Results In this subsection, we present the experimental results on the test sets of Yahoo data and Baidu data. We compare the baseline word embedding trained by Skip-gram against this trained by M-NET. The dimension of word embedding is set as 50,100 and 300. Since the motivation of this paper attempts to tackle the lexical gap problem for queried questions and questions in the archive, we also compare them with the two groups of methods which also address the lexical gap in the literature. The first group is the translation models: word-based translation model (Jeon et al., 2005), word-based translation language model (Xue et al., 2008), and phrase-based translation model (Zhou et al., 2011). We implement those three translation models based on the original papers and train those models with (question, best answer) pairs from the Yahoo! Webscope dataset Yahoo answers and the 1 billion web pages of Baidu Zhidao for English and Chinese, respectively. Training the translation models with different pairs (e.g., question-best answer, question-description, question-answer) may achieve inconsistent performance on Yahoo data and Baidu data, but its comparison and analysis are beyond the scope of this paper. The second group is the topic-based methods: unsupervised question-answer topic model (Ji et al., 2012) and supervised question-answer topic model (Zhang et al., 2014a). We re-implement these two topicbased models and tune the parameter settings on our data set. Besides, we also introduce a baseline language model (LM) (Zhai and Lafferty, 2001) for comparison. Table 2 shows the question retrieval performance by using different evaluation metrics. From this table, we can see that learning continuous word embedding representations (Skip-gram + FV, M-NET + FV) for question retrieval can outperform the translation-based approaches and topic-based approaches on all evaluation metrics. We conduct a statistical test (t-test), the results 256 Model dim Yahoo data Baidu data MAP MRR R-Prec P@5 MAP MRR R-Prec P@5 LM (baseline) 0.435 0.472 0.381 0.305 0.392 0.413 0.325 0.247 (Jeon et al., 2005) 0.463 0.495 0.396 0.332 0.414 0.428 0.341 0.256 (Xue et al., 2008) 0.518 0.560 0.423 0.346 0.431 0.435 0.352 0.264 (Zhou et al., 2011) 0.536 0.587 0.439 0.361 0.448 0.450 0.367 0.273 (Ji et al., 2012) 0.508 0.544 0.405 0.324 0.425 0.431 0.349 0.258 (Zhang et al., 2014a) 0.527 0.572 0.433 0.350 0.443 0.446 0.358 0.265 Skip-gram + FV 50 0.532 0.583 0.437 0.358 0.447 0.450 0.366 0.272 100 0.544 0.605† 0.440 0.363 0.454 0.457 0.373 0.274 300 0.550† 0.619† 0.444 0.365 0.460† 0.464† 0.374 0.277 M-NET + FV 50 0.548† 0.612† 0.441 0.363 0.459† 0.462† 0.374 0.276 100 0.562‡ 0.628‡ 0.452† 0.367‡ 0.468‡ 0.471 0.378† 0.280† 300 0.571‡ 0.643‡ 0.455‡ 0.374‡ 0.475‡ 0.477‡ 0.385‡ 0.283‡ Table 2: Evaluation results on Yahoo data and Baidu data, where dim denotes the dimension of the word embeddings. The bold formate indicates the best results for question retrieval. 
† indicates that the difference between the results of our proposed approach (Skip-gram + FV, M-NET + FV) and other methods are mildly significant with p < 0.08 under a t-test; ‡ indicates the comparisons are statistically significant with p < 0.05. show that the improvements between the proposed M-NET + FV and the two groups of compared methods (translation-based approaches and topic-based approaches) are statistically significant (p < 0.05), while the improvements between Skip-gram + FV and the translation-based approaches are mildly significant (p < 0.08). Moreover, the metadata of category information powered model (M-NET + FV) outperforms the baseline skip-gram model (Skip-gram + FV) and yields the largest improvements. These results can imply that the metadata powered word embedding is of higher quality than the baseline model with no metadata information regularization. Besides, we also note that setting higher dimension brings more improvements for question retrieval task. Translation-based methods significantly outperform LM, which demonstrate that matching questions with the semantically related translation words or phrases from question-answer pairs can effectively address the word lexical gap problem. Besides, we also note that phrase-based translation model is more effective because it captures some contextual information in modeling the translation of phrases as a whole. More precise translation can be determined for phrases than for words. Similar observation has also been found in the previous work (Zhou et al., 2011). On both data sets, topic-based models achieve comparable performance with the translationbased models and but they perform better than LM. The results demonstrate that learning the latent topics aligned across the question-answer pairs can be an alternative for bridging lexical gap problem for question retrieval. 5 Conclusion This paper proposes to learn continuous vector representations for question retrieval in cQA. We firstly introduce a new metadata powered word embedding method, called M-NET, to leverage the category information within cQA pages to obtain word representations. Once the words are embedded in a continuous space, we treat each question as a BoEW. Then, the variable size BoEWs are aggregated into fixed-length vectors by using FK. Finally, the dot product between FVs are used to calculate the semantic similarities for question retrieval. Experiments on large-scale real world cQA data demonstrate that the efficacy of the proposed approach. For the future work, we will explore how to incorporate more types of metadata information, such as the user ratings, like signals and Poll and Survey signals, into the learning process to obtain more powerful word representations. Acknowledgments This work was supported by the National Natural Science Foundation of China (No. 61303180, 257 No. 61272332 and 61402191), the Beijing Natural Science Foundation (No. 4144087), the Major Project of National Social Science Found (No. 12&2D223), the Fundamental Research Funds for the Central Universities (No. CCNU15ZD003), and also Sponsored by CCF-Tencent Open Research Fund. We thank the anonymous reviewers for their insightful comments. References Lada A. Adamic, Jun Zhang, Eytan Bakshy, and Mark S. Ackerman. 2008. Knowledge sharing and yahoo answers: Everyone knows something. In Proceedings of WWW, pages 665–674. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. J. Mach. Learn. Res., 3. 
Delphine Bernhard and Iryna Gurevych. 2009. Combining lexical semantic resources with question & answer archives for translation-based answer finding. In Proceedings of ACL-IJCNLP. Li Cai, Guangyou Zhou, Kang Liu, and Jun Zhao. 2011. Learning the latent topics for question retrieval in community qa. In Proceedings of IJCNLP, pages 273–281. Xin Cao, Gao Cong, Bin Cui, and Christian S. Jensen. 2010. A generalized framework of exploring category information for question retrieval in community question answer archives. In Proceedings of WWW, pages 201–210. David Carmel, Avihai Mejer, Yuval Pinter, and Idan Szpektor. 2014. Improving term weighting for community question answering search using syntactic analysis. In Proceedings of CIKM, pages 351–360. Stephane Clinchant and Florent Perronnin. 2013. Aggregating continuous word embeddings for information retrieval. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality, pages 100–109. Huizhong Duan, Yunbo Cao, Chin yew Lin, and Yong Yu. 2008. Searching questions by identifying question topic and question focus. In Proceedings of ACL. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of ACL, pages 873–882. Jiwoon Jeon, W. Bruce Croft, and Joon Ho Lee. 2005. Finding similar questions in large question and answer archives. In Proceedings of CIKM. Zongcheng Ji, Fei Xu, Bin Wang, and Ben He. 2012. Question-answer topic model for question retrieval in community question answering. In Proceedings of CIKM, pages 2471–2474. Jung-Tae Lee, Sang-Bum Kim, Young-In Song, and Hae-Chang Rim. 2008. Bridging lexical gaps between queries and questions on large online q&a collections with compact translation models. In Proceedings of EMNLP, pages 410–418. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning word vectors for sentiment analysis. In Proceedings of ACL, pages 142–150. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Proceedings of NIPS, pages 3111–3119. Stefan Riezler, Er Vasserman, Ioannis Tsochantaridis, Vibhu Mittal, and Yi Liu. 2007. Statistical machine translation for query expansion in answer retrieval. In Proceedings of ACL. S. Robertson, S. Walker, S. Jones, M. HancockBeaulieu, and M. Gatford. 1994. Okapi at trec-3. In Proceedings of TREC, pages 109–126. Jorge Sanchez, Florent Perronnin, Thomas Mensink, and Jakob J. Verbeek. 2013. Image classification with the fisher vector: Theory and practice. International Journal of Computer Vision, pages 222–245. A. Singh. 2012. Entity based q&a retrieval. In Proceedings of EMNLP, pages 1266–1277. M. Surdeanu, M. Ciaramita, and H. Zaragoza. 2008. Learning to rank answers on large online qa collections. In Proceedings of ACL, pages 719–727. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proceedings of ACL. Kai Wang, Zhaoyan Ming, and Tat-Seng Chua. 2009. A syntactic tree matching approach to finding similar questions in community-based qa services. In Proceedings of SIGIR, pages 187–194. B. Wang, X. Wang, C. Sun, B. Liu, and L. Sun. 2010. Modeling semantic relevance for question-answer pairs in web social communities. In ACL. 
Chang Xu, Yalong Bai, Jiang Bian, Bin Gao, Gang Wang, Xiaoguang Liu, and Tie-Yan Liu. 2014. Rcnet: A general framework for incorporating knowledge into word representations. In Proceedings of CIKM, pages 1219–1228. Xiaobing Xue, Jiwoon Jeon, and W. Bruce Croft. 2008. Retrieval models for question and answer archives. In Proceedings of SIGIR, pages 475–482. 258 Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In Proceedings of ACL, pages 545–550. Chengxiang Zhai and John Lafferty. 2001. A study of smoothing methods for language models applied to ad hoc information retrieval. In Proceedings of SIGIR, pages 334–342. Kai Zhang, Wei Wu, Haocheng Wu, Zhoujun Li, and Ming Zhou. 2014a. Question retrieval with high quality answers in community question answering. In Proceedings of CIKM, pages 371–380. Qi Zhang, Jihua Kang, Jin Qian, and Xuanjing Huang. 2014b. Continuous word embeddings for detecting local text reuses at the semantic level. In Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, SIGIR ’14, pages 797–806. Guangyou Zhou, Li Cai, Jun Zhao, and Kang Liu. 2011. Phrase-based translation model for question retrieval in community question answer archives. In Proceedings of ACL, pages 653–662. Guangyou Zhou, Yubo Chen, Daojian Zeng, and Jun Zhao. 2013. Towards faster and better retrieval models for question search. In Proceedings of CIKM, pages 2139–2148. 259
2015
25
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 260–269, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Question Answering over Freebase with Multi-Column Convolutional Neural Networks Li Dong†∗Furu Wei‡ Ming Zhou‡ Ke Xu† †SKLSDE Lab, Beihang University, Beijing, China ‡Microsoft Research, Beijing, China [email protected] {fuwei,mingzhou}@microsoft.com [email protected] Abstract Answering natural language questions over a knowledge base is an important and challenging task. Most of existing systems typically rely on hand-crafted features and rules to conduct question understanding and/or answer ranking. In this paper, we introduce multi-column convolutional neural networks (MCCNNs) to understand questions from three different aspects (namely, answer path, answer context, and answer type) and learn their distributed representations. Meanwhile, we jointly learn low-dimensional embeddings of entities and relations in the knowledge base. Question-answer pairs are used to train the model to rank candidate answers. We also leverage question paraphrases to train the column networks in a multi-task learning manner. We use FREEBASE as the knowledge base and conduct extensive experiments on the WEBQUESTIONS dataset. Experimental results show that our method achieves better or comparable performance compared with baseline systems. In addition, we develop a method to compute the salience scores of question words in different column networks. The results help us intuitively understand what MCCNNs learn. 1 Introduction Automatic question answering systems return the direct and exact answers to natural language questions. In recent years, the development of largescale knowledge bases, such as FREEBASE (Bollacker et al., 2008), provides a rich resource to answer open-domain questions. However, how ∗Contribution during internship at Microsoft Research. to understand questions and bridge the gap between natural languages and structured semantics of knowledge bases is still very challenging. Up to now, there are two mainstream methods for this task. The first one is based on semantic parsing (Berant et al., 2013; Berant and Liang, 2014) and the other relies on information extraction over the structured knowledge base (Yao and Van Durme, 2014; Bordes et al., 2014a; Bordes et al., 2014b). The semantic parsers learn to understand natural language questions by converting them into logical forms. Then, the parse results are used to generate structured queries to search knowledge bases and obtain the answers. Recent works mainly focus on using question-answer pairs, instead of annotated logical forms of questions, as weak training signals (Liang et al., 2011; Krishnamurthy and Mitchell, 2012) to reduce annotation costs. However, some of them still assume a fixed and pre-defined set of lexical triggers which limit their domains and scalability capability. In addition, they need to manually design features for semantic parsers. The second approach uses information extraction techniques for open question answering. These methods retrieve a set of candidate answers from the knowledge base, and the extract features for the question and these candidates to rank them. However, the method proposed by Yao and Van Durme (2014) relies on rules and dependency parse results to extract hand-crafted features for questions. 
Moreover, some methods (Bordes et al., 2014a; Bordes et al., 2014b) use the summation of question word embeddings to represent questions, which ignores word order information and cannot process complicated questions. In this paper, we introduce the multi-column convolutional neural networks (MCCNNs) to automatically analyze questions from multiple aspects. Specifically, the model shares the same word embeddings to represent question words. 260 MCCNNs use different column networks to extract answer types, relations, and context information from the input questions. The entities and relations in the knowledge base (namely FREEBASE in our experiments) are also represented as low-dimensional vectors. Then, a score layer is employed to rank candidate answers according to the representations of questions and candidate answers. The proposed information extraction based method utilizes question-answer pairs to automatically learn the model without relying on manually annotated logical forms and hand-crafted features. We also do not use any pre-defined lexical triggers and rules. In addition, the question paraphrases are also used to train networks and generalize for the unseen words in a multi-task learning manner. We have conducted extensive experiments on WEBQUESTIONS. Experimental results illustrate that our method outperforms several baseline systems. The contributions of this paper are three-fold: • We introduce multi-column convolutional neural networks for question understanding without relying on hand-crafted features and rules, and use question paraphrases to train the column networks and word vectors in a multi-task learning manner; • We jointly learn low-dimensional embeddings for the entities and relations in FREEBASE with question-answer pairs as supervision signals; • We conduct extensive experiments on the WEBQUESTIONS dataset, and provide some intuitive interpretations for MCCNNs by developing a method to detect salient question words in the different column networks. 2 Related Work The state-of-the-art methods for question answering over a knowledge base can be classified into two classes, i.e., semantic parsing based and information retrieval based. Semantic parsing based approaches aim at learning semantic parsers which parse natural language questions into logical forms and then query knowledge base to lookup answers. The most important step is mapping questions into predefined logical forms, such as combinatory categorial grammar (Cai and Yates, 2013) and dependencybased compositional semantics (Liang et al., 2011). Some semantic parsing based systems required manually annotated logical forms to train the parsers (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2010). These annotations are relatively expensive. So recent works (Liang et al., 2011; Kwiatkowski et al., 2013; Berant et al., 2013; Berant and Liang, 2014; Bao et al., 2014; Reddy et al., 2014) mainly aimed at using weak supervision (question-answer pairs) to effectively train semantic parsers. These methods achieved comparable results without using logical forms annotated by experts. However, some methods relied on lexical triggers or manually defined features. On the other hand, information retrieval based systems retrieve a set of candidate answers and then conduct further analysis to obtain answers. Their main difference is how to select correct answers from the candidate set. 
Yao and Van Durme (2014) used rules to extract question features from dependency parse of questions, and used relations and properties in the retrieved topic graph as knowledge base features. Then, the production of these two kinds of features was fed into a logistic regression model to classify the question’s candidate answers into correct/wrong. In contrast, we do not use rules, dependency parse results, or hand-crafted features for question understanding. Some other works (Bordes et al., 2014a; Bordes et al., 2014b) learned low-dimensional vectors for question words and knowledge base constitutes, and used the sum of vectors to represent questions and candidate answers. However, simple vector addition ignores word order information and highorder n-grams. For example, the question representations of who killed A and who A killed are same in the vector addition model. We instead use multi-column convolutional neural networks which are more powerful to process complicated question patterns. Moreover, our multi-column network architecture distinguishes between information of answer type, answer path and answer context by learning multiple column networks, while the addition model mixes them together. Another line of related work is applying deep learning techniques for the question answering task. Grefenstette et al. (2014) proposed a deep architecture to learn a semantic parser from annotated logic forms of questions. Iyyer et al. (2014) introduced dependency-tree recursive neural networks for the quiz bowl game which asked players to answer an entity for a given paragraph. Yu et 261 al. (2014) proposed a bigram model based on convolutional neural networks to select answer sentences from text data. The model learned a similarity function between questions and answer sentences. Yih et al. (2014) used convolutional neural networks to answer single-relation questions on REVERB (Fader et al., 2011). However, the system worked on relation-entity triples instead of more structured knowledge bases. For instance, the question shown in Figure 1 is answered by using several triples in FREEBASE. Also, we can utilize richer information (such as entity types) in structured knowledge bases. 3 Setup Given a natural language question q = w1 . . . wn, we retrieve related entities and properties from FREEBASE and use them as the candidate answers Cq. Our goal is to score these candidates and predict answers. For instance, the correct output of the question when did Avatar release in UK is 2009-12-17. It should be noted that there may be several correct answers for a question. In order to train the model, we use question-answer pairs without annotated logic forms. We further describe the datasets used in our work as follows: WebQuestions This dataset (Berant et al., 2013) contains 3,778 training instances and 2,032 test instances. We further split the training instances into the training set and the development set by 80%/20%. The questions were collected by querying the Google Suggest API. A breadth-first search beginning with wh- was conducted. Then, answers were annotated in Amazon Mechanical Turk. All the answers can be found in FREEBASE. Freebase It is a large-scale knowledge base that consists of general facts (Bollacker et al., 2008). These facts are organized as subject-propertyobject triples. For example, the fact Avatar is directed by James Cameron is represented by (/m/0bth54, film.film.directed by, /m/03 gd) in RDF format. 
The preprocess method presented in (Bordes et al., 2014a) was used to make FREEBASE fit in memory. Specifically, we kept the triples where one of the entities appeared in the training/development set of WEBQUESTIONS or CLUEWEB extractions provided in (Lin et al., 2012), and removed the entities appearing less than five times. Then, we obtained 18M triples that contained 2.9M entities and 7k relation types. As described in (Bordes et al., 2014a), this preprocess method does not ease the task because WEBQUESTIONS only contains about 2k entities. WikiAnswers Fader et al. (2013) extracted the similar questions on WIKIANSWERS and used them as question paraphrases. There are 350,000 paraphrase clusters which contain about two million questions. They are used to generalize for unseen words and question patterns. 4 Methods The overview of our framework is shown in Figure 1. For instance, for the question when did Avatar release in UK, the related nodes of the entity Avatar are queried from FREEBASE. These related nodes are regarded as candidate answers (Cq). Then, for every candidate answer a, the model predicts a score S (q, a) to determine whether it is a correct answer or not. We use multi-column convolutional neural networks (MCCNNs) to learn representations of questions. The models share the same word embeddings, and have multiple columns of convolutional neural networks. The number of columns is set to three in our QA task. These columns are used to analyze different aspects of a question, i.e., answer path, answer context, and answer type. The vector representations learned by these columns are denoted as f1 (q) , f2 (q) , f3 (q). We also learn embeddings for the candidate answers appeared in FREEBASE. For every candidate answer a, we compute its vector representations and denote them as g1 (a) , g2 (a) , g3 (a). These three vectors correspond to the three aspects used in question understanding. Using these vector representations defined for questions and answers, we can compute the score for the question-answer pair (q, a). Specifically, the scoring function S (q, a) is defined as: S (q, a) = f1 (q)Tg1 (a) | {z } answer path + f2 (q)Tg2 (a) | {z } answer context + f3 (q)Tg3 (a) | {z } answer type (1) where fi (q) and gi (a) have the same dimension. As shown in Figure 1, the score layer computes scores and adds them together. 4.1 Candidate Generation The first step is to retrieve candidate answers from FREEBASE for a question. Questions should contain an identified entity that can be linked to the 262 when did Avatar release in UK <L> <R> Convolutional Layer Max-Pooling Layer Shared Word Representations Avatar m.0bth54 James Cameron m.03_gd film.film.directed_by type.object.type people.person film.producer type.object.type m.09w09jk film.film.release _date_s type.object.type film.film_region al_release_date United States of America m.09c7w0 film.film_regional_release _date.film_release_region film.film_regional_release _date.release_date 2009-12-18 datetime value_type m.0gdp17z film.film. release_date_s type.object.type film.film_region al_release_date United Kingdom m.07ssc film.film_regional_release _date.film_release_region film.film_regional_release _date.release_date 2009-12-17 datetime value_type Score Layer Score + + Dot Product Answer Path Answer Context Answer Type Figure 1: Overview for the question-answer pair (when did Avatar release in UK, 2009-12-17). Left: network architecture for question understanding. Right: embedding candidate answers. knowledge base. 
We use the Freebase Search API (Bollacker et al., 2008) to query named entities in a question. If there is not any named entity, noun phrases are queried. We use the top one entity in the ranked list returned by the API. This entity resolution method was also used in (Yao and Van Durme, 2014). Better methods can be developed, while it is not the focus of this paper. Then, all the 2-hops nodes of the linked entity are regarded as the candidate answers. We denote the candidate set for the question q as Cq. 4.2 MCCNNs for Question Understanding MCCNNs use multiple convolutional neural networks to learn different aspects of questions from shared input word embeddings. For every single column, the network structure presented in (Collobert et al., 2011) is used to tackle the variablelength questions. We present the model in the left part of Figure 1. Specifically, for the question q = w1 . . . wn, the lookup layer transforms every word into a vector wj = Wvu(wj), where Wv ∈Rdv×|V | is the word embedding matrix, u(wj) ∈{0, 1}|V | is the one-hot representation of wj, and |V | is the vocabulary size. The word embeddings are parameters, and are updated in the training process. Then, the convolutional layer computes representations of the words in sliding windows. For the i-th column of MCCNNs, the convolutional layer computes n vectors for question q. The jth vector is: x(i) j = h  W(i)h wT j−s . . . wT j . . . wT j+s iT + b(i)  (2) where (2s + 1) is the window size, W(i) ∈ Rdq×(2s+1)dv is the weight matrix of convolutional layer, b(i) ∈Rdq×1 is the bias vector, and h (·) is the nonlinearity function (such as softsign, tanh, and sigmoid). Paddings are used for left and right absent words. Finally, a max-pooling layer is followed to obtain the fixed-size vector representations of questions. The max-pooling layer in the i-th column of MCCNNs computes the representation of the question q via: fi (q) = max j=1,...,n{x(i) j } (3) where max{·} is an element-wise operator over vectors. 4.3 Embedding Candidate Answers Vector representations g1 (a) , g2 (a) , g3 (a) are learned for the candidate answer a. The vectors are employed to represent different aspects of a. The embedding methods are described as follows: Answer Path The answer path is the set of relations between the answer node and the entity asked in question. As shown in Figure 1, the 2-hops path between the entity Avatar and the correct answer is (film.film.release date s, 263 film.film regional release date.release date). The vector representation g1(a) is computed via g1(a) = 1 ∥up(a)∥1 Wpup(a), where ∥·∥1 is 1-norm, up(a) ∈R|R|×1 is a binary vector which represents the presence or absence of every relation in the answer path, Wp ∈Rdq×|R| is the parameter matrix, and |R| is the number of relations. In other words, the embeddings of relations that appear on the answer path are averaged. Answer Context The 1-hop entities and relations connected to the answer path are regarded as the answer context. It is used to deal with constraints in questions. For instance, as shown in Figure 1, the release date of Avatar in UK is asked, so it is not enough that only the triples on answer path are considered. With the help of context information, the release date in UK has a higher score than in USA. 
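Returning to the question side, a single MCCNN column amounts to a windowed affine transform followed by an element-wise max over positions, as in Equations (2) and (3). The numpy sketch below uses a toy vocabulary and random parameters rather than the trained model:

```python
import numpy as np

rng = np.random.default_rng(1)
d_v, d_q, s = 25, 64, 2           # word dim, column dim, half window (window size 2s + 1 = 5)
vocab = {"when": 0, "did": 1, "avatar": 2, "release": 3, "in": 4, "uk": 5}
W_v = rng.normal(scale=0.1, size=(d_v, len(vocab)))          # shared word embeddings
W_i = rng.normal(scale=0.1, size=(d_q, (2 * s + 1) * d_v))   # convolution weights of column i
b_i = np.zeros(d_q)

def column_representation(question):
    words = question.lower().split()
    vecs = [W_v[:, vocab[w]] for w in words]
    # zero-padding for absent words at the left and right edges
    padded = [np.zeros(d_v)] * s + vecs + [np.zeros(d_v)] * s
    xs = []
    for j in range(len(words)):
        window = np.concatenate(padded[j:j + 2 * s + 1])   # w_{j-s} ... w_j ... w_{j+s}
        xs.append(np.tanh(W_i @ window + b_i))             # Eq. (2), with tanh nonlinearity
    return np.max(np.stack(xs), axis=0)                    # Eq. (3): element-wise max-pooling

f_i_q = column_representation("when did Avatar release in UK")
print(f_i_q.shape)   # (64,)
```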
The context representation is g2(a) = 1 ∥uc(a)∥1 Wcuc(a), where Wc ∈Rdq×|C| is the parameter matrix, uc(a) ∈R|C|×1 is a binary vector expressing the presence or absence of context nodes, and |C| is the number of entities and relations which appear in answer context. Answer Type Type information in FREEBASE is an important clue to score candidate answers. As illustrated in Figure 1, the type of 2009-12-17 is datetime, and the type of James Cameron is people.person and film.producer. For the example question when did Avatar release in UK, the candidate answers whose types are datetime should be assigned with higher scores than others. The vector representation is defined as g3(a) = 1 ∥ut(a)∥1 Wtut(a), where Wt ∈Rdq×|T| is the matrix of type embeddings, ut(a) ∈R|T|×1 is a binary vector which indicates the presence or absence of answer types, and |T| is the number of types. In our implementation, we use the relation common.topic.notable types to query types. If a candidate answer is a property value, we instead use its value type (e.g., float, string, datetime). 4.4 Model Training For every correct answer a ∈Aq of the question q, we randomly sample k wrong answers a′ from the set of candidate answers Cq, and use them as negative instances to estimate parameters. To be more specific, the hinge loss is considered for pairs (q, a) and (q, a′): l q, a, a′ = m −S(q, a) + S(q, a′)  + (4) where S(·, ·) is the scoring function defined in Equation (1), m is the margin parameter employed to regularize the gap between two scores, and (z)+ = max{0, z}. The objective function is: min X q 1 |Aq| X a∈Aq X a′∈Rq l q, a, a′ (5) where |Aq| is the number of correct answers, and Rq ⊆Cq \ Aq is the set of k wrong answers. The back-propagation algorithm (Rumelhart et al., 1986) is used to train the model. It backpropagates errors from top to the other layers. Derivatives are calculated and gathered to update parameters. The AdaGrad algorithm (Duchi et al., 2011) is then employed to solve this non-convex optimization problem. Moreover, the max-norm regularization (Srebro and Shraibman, 2005; Srivastava et al., 2014) is used for the column vectors of parameter matrices. 4.5 Inference During the test, we retrieve all the candidate answers Cq for the question q. For every candidate ˆa, we compute its score S(q, ˆa). Then, the candidate answers with the highest scores are regarded as predicted results. Because there may be more than one correct answers for some questions, we need a criterion to determine the score threshold. Specifically, the following equation is used to determine outputs: ˆ Aq = {ˆa | ˆa ∈Cq and max a′∈Cq{S(q, a′)} −S(q, ˆa) < m} (6) where m is the margin defined in Equation (4). The candidates whose scores are not far from the best answer are regarded as predicted results. Some questions may have a large set of candidate answers. So we use a heuristic method to prune their candidate sets. To be more specific, if the number of candidates on the same answer path is greater than 200, we randomly keep 200 candidates for this path. Then, we score and rank all these generated candidate answers together. If one of the candidates on the pruned path is regarded as a predicted answer, we further score the other candidates that are pruned on this path and determine the final results. 264 4.6 Question Paraphrases for Multi-Task Learning We use the question paraphrases dataset WIKIANSWERS to generalize for words and question patterns which are unseen in the training set of question-answer pairs. 
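The per-pair training loss of Equation (4) needs only the score of the correct answer and the scores of the k sampled wrong answers. A schematic numpy version (gradient computation and AdaGrad updates omitted):

```python
import numpy as np

rng = np.random.default_rng(2)
d = 64

def score(f_vecs, g_vecs):
    # S(q, a): sum of dot products over the three columns (Equation (1))
    return sum(fi @ gi for fi, gi in zip(f_vecs, g_vecs))

def pair_loss(f_q, g_pos, g_negs, margin=0.5):
    # Equation (4): l(q, a, a') = [m - S(q, a) + S(q, a')]_+ summed over k wrong answers
    s_pos = score(f_q, g_pos)
    return sum(max(0.0, margin - s_pos + score(f_q, g_neg)) for g_neg in g_negs)

f_q = [rng.normal(size=d) for _ in range(3)]                 # question columns f1, f2, f3
g_pos = [rng.normal(size=d) for _ in range(3)]               # correct answer's path/context/type vectors
g_negs = [[rng.normal(size=d) for _ in range(3)] for _ in range(5)]   # k = 5 sampled wrong answers
print(pair_loss(f_q, g_pos, g_negs))
```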
The question understanding results of paraphrases should be same. Consequently, the representations of two paraphrases computed by the same column of MCCNNs should be similar. We use dot similarity to define the hinge loss lp (q1, q2, q3) as: lp (q1, q2, q3) = 3 X i=1  mp −fi (q1)Tfi (q2) + fi (q1)Tfi (q3)  + (7) where q1, q2 are questions in the same paraphrase cluster P, q3 is randomly sampled from another cluster, and mp is the margin. The objective function is defined as: min X P X q1,q2∈P X q3∈RP lp (q1, q2, q3) (8) where RP contains kp questions which are randomly sampled from other clusters. The same optimization algorithm described in Section 4.4 is used to update parameters. 5 Experiments In order to evaluate the model, we use the dataset WEBQUESTIONS (Section 3) to conduct experiments. Settings The development set is used to select hyper-parameters in the experiments. The nonlinearity function f = tanh is employed. The dimension of word vectors is set to 25. They are initialized by the pre-trained word embeddings provided in (Turian et al., 2010). The window size of MCCNNs is 5. The dimension of the pooling layers and the dimension of answer embeddings are set to 64. The parameters are initialized by the techniques described in (Bengio, 2012). The max value used for max-norm regularization is 3. The initial learning rate used in AdaGrad is set to 0.01. A mini-batch consists of 10 question-answer pairs, and every question-answer pair has k negative samples that are randomly sampled from its candidate set. The margin values in Equation (4) and Equation (7) is set to m = 0.5 and mp = 0.1. Method F1 P@1 (Berant et al., 2013) 31.4 (Berant and Liang, 2014) 39.9 (Bao et al., 2014) 37.5 (Yao and Van Durme, 2014) 33.0 (Bordes et al., 2014a) 39.2 40.4 (Bordes et al., 2014b) 29.7 31.3 MCCNN (our) 40.8 45.1 Table 1: Evaluation results on the test split of WEBQUESTIONS. 5.1 Experimental Results The evaluation metrics macro F1 score (Berant et al., 2013) and precision @ 1 (Bordes et al., 2014a) are reported. We use the official evaluation script provided by Berant et al. (2013) to compute the F1 score. Notably, the F1 score defined in (Yao and Van Durme, 2014) is slightly different from others (how to compute scores for the questions without predicted results). We instead use the original definition in experiments. As shown in Table 1, our method achieves better or comparable results than baseline methods on WEBQUESTIONS. To be more specific, the first three rows are semantic parsing based methods, and the other baselines are information extraction based methods. These approaches except (Bordes et al., 2014a; Bordes et al., 2014b) rely on handcrafted features and predefined rules. The results show that automatically question understanding can be as good as the models using manually designed features. Besides, our multi-column convolutional neural networks based model outperforms the methods that use the sum of word embeddings as question representations (Bordes et al., 2014a; Bordes et al., 2014b). 5.2 Model Analysis We also conduct ablation experiments to compare the results using different experiment settings. As shown in Table 2, the abbreviation w/o means removing a particular part from the model. We find that answer path information is most important among these three columns, and answer type information is more important than answer context information. 
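The multi-task paraphrase objective of Equation (7), whose contribution is examined in the ablations below, has the same shape as the answer-ranking loss but compares question representations column by column. A schematic version:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 64

def paraphrase_loss(f_q1, f_q2, f_q3, margin=0.1):
    # Equation (7): in every column i, the paraphrase pair (q1, q2) should score
    # higher (by dot product) than (q1, q3), where q3 comes from another cluster.
    loss = 0.0
    for f1, f2, f3 in zip(f_q1, f_q2, f_q3):
        loss += max(0.0, margin - f1 @ f2 + f1 @ f3)
    return loss

# Column representations of q1, q2 (same paraphrase cluster) and q3 (a different cluster).
f_q1 = [rng.normal(size=d) for _ in range(3)]
f_q2 = [rng.normal(size=d) for _ in range(3)]
f_q3 = [rng.normal(size=d) for _ in range(3)]
print(paraphrase_loss(f_q1, f_q2, f_q3))
```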
The reason is that answer path and answer type are more direct clues for questions, but answer context is used to handle additional constraints in questions which are less common in the dataset. Moreover, we compare to the 265 Setting F1 P@1 all 40.8 45.1 w/o path 32.5 37.1 w/o type 37.7 40.9 w/o context 39.1 41.0 w/o multi-column 38.4 41.8 w/o paraphrase 40.0 43.9 1-hop 29.3 32.2 Table 2: Evaluation results of different settings on the test split of WEBQUESTIONS. w/o path/type/context: without using the specific column. w/o multi-column: tying parameters of multiple columns. w/o paraphrase: without using question paraphrases for training. 1-hop: using 1hop paths to generate candidate answers. model using single-column networks (w/o multicolumn), i.e., tying the parameters of different columns. The results indicate that using multiple columns to understand questions from different aspects improves the performance. Besides, we find that using question paraphrases in a multi-task learning manner contributes to the performance. In addition, we evaluate the results only using 1hop paths to generate candidate answers. Compared to using 2-hops paths, we find that the performance drops significantly. This indicates only using the nodes directly connected to the queried entity in FREEBASE cannot handle many questions. 5.3 Salient Words Detection In order to analyze the model, we detect salient words in questions. The salience score of a question word depends on how much the word affects the computation of question representation. In other words, if a word plays more important role in the model, its salience score should be larger. We compute several salience scores for a same word to illustrate its importance in different columns of networks. For the i-th column, the salience score of word wj in the question q = wn 1 is defined as: ei(wj) = fi (wn 1 ) −fi  wj−1 1 w′ jwn j+1  2 (9) where the word wj is replaced with w′ j, and ∥·∥2 denotes Euclidean norm. In practice, we replace wj with several stop words (such as is, to, and a), and then compute their average score. what type of car does weston drive what countries speak german as a first language who is the current leader of cuba today where is the microsoft located Answer Path Answer Type Answer Context Figure 2: Salient words detection results for questions. From left to right, the three bars of every word correspond to salience scores in answer path column, answer type column, and answer context column, respectively. The salience scores are normalized by the max values of different columns. As shown in Figure 2, we compute salience scores for several questions, and normalize them by the max values in different columns. We clearly see that these words play different roles in a question. The overall conclusion is that the wh- words (such as what, who and where) tend to be important for question understanding. Moreover, nouns dependent of the wh- words and verbs are important clues to obtain question representations. For instance, the figure demonstrates that the nouns type/country/leader and the verbs speak/located are salient in the columns of networks. These observations agree with previous works (Li and Roth, 2002). Some manually defined rules (Yao and Van Durme, 2014) used in the question answering task are also based on them. 5.4 Examples Question representations computed by different columns of MCCNNs are used to query their most similar neighbors. We use cosine similarity in experiments. 
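The salience score of Equation (9) only requires re-encoding the question with one word replaced by a stop word. The sketch below uses a stand-in encoder so that it runs on its own; in the actual model the trained column representation f_i would be used instead:

```python
import numpy as np

d = 64
STOP_WORDS = ["is", "to", "a"]

def embed(word):
    # Deterministic stand-in for the shared word embeddings (not the trained ones).
    return np.random.default_rng(abs(hash(word)) % (2 ** 32)).normal(size=d)

def column_representation(words):
    # Stand-in for f_i(q): element-wise max over word embeddings.
    return np.max(np.stack([embed(w) for w in words]), axis=0)

def salience(words, j):
    # Equation (9): Euclidean distance between the representation of the original
    # question and the one with word j replaced, averaged over several stop words.
    original = column_representation(words)
    diffs = []
    for stop in STOP_WORDS:
        perturbed = words[:j] + [stop] + words[j + 1:]
        diffs.append(np.linalg.norm(original - column_representation(perturbed)))
    return float(np.mean(diffs))

question = "what type of car does weston drive".split()
scores = [salience(question, j) for j in range(len(question))]
print(sorted(zip(question, scores), key=lambda t: -t[1])[:3])
```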
This experiment demonstrates whether the model learns different aspects of questions. For example, if a column of networks is employed to analyze answer types, the answer types of nearest questions should be same as the query. As shown in Table 3, these three columns of table correspond to different columns of networks. To be more specific, the first column is used to process answer path. We find that the model learns different question patterns for the same 266 Column 1 (Answer Path) Column 2 (Answer Type) Column 3 (Answer Context) what to do in hollywood can this weekend what to do in midland tx this weekend what to do in cancun with family what to do at fairfield can what to see in downtown asheville nc what to see in toronto top 10 where be george washington originally from where be george washington carver from where be george bush from where be the thame river source where be the main headquarters of google in what town do ned kelly and he family grow up where do charle draw go to college where do kevin love go to college where do pauley perrette go to college where do kevin jame go to college where do charle draw go to high school where do draw bree go to college wikianswer who found collegehumor who found the roanoke settlement who own skywest who start mary kay who be the owner of kfc who own wikimedium foundation who be the leader of north korea today who be the leader of syrium now who be the leader of cuba 2012 who be the leader of france 2012 who be the current leader of cuba today who be the minority leader of the house of representative now who be judy garland father who be clint eastwood date who be emma stone father who be robin robert father who miley cyrus engage to who be chri cooley marry to what type of money do japanese use what kind of money do japanese use what type of money do jamaica use what type of currency do brazil use what type of money do you use in cuba what money do japanese use what be the two official language of paraguay what be the local language of israel what be the four official language of nigerium what be the official language of jamaica what be the dominant language of jamaica what be the official language of brazil now what be the timezone in vancouver what be my timezone in californium what be los angeles california time zone what be my timezone in oklahoma what be my timezone in louisiana what be the time zone in france Table 3: Using question representations obtained by different column networks to query the nearest neighbors. From left to right, the three columns are used to analyze information about answer path, answer type, and answer context, respectively. Lemmatization is used to better show question patterns. path. For instance, the vector representations of “who found/own/start *” and “who be the owner of *” obtained by the first column are similar. The second column is employed to extract answer type information from questions. The answer types of example questions in Table 3 are same, while they may ask different relations. The third column learns to embed question information into answer context. We find that the similar questions are clustered together by this column. 5.5 Error Analysis We investigate the predicted results on the development set, and show several error causes as follows. Candidate Generation Some entity mentions in questions are linked incorrectly, hence we cannot obtain the desired candidate answers. 
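Looping back to the neighbour queries of Table 3: retrieving the nearest questions under a given column is a plain cosine-similarity search over the stored question vectors. A minimal sketch with random stand-in representations:

```python
import numpy as np

rng = np.random.default_rng(5)
d = 64
questions = ["who found collegehumor", "who owns skywest", "who started mary kay",
             "what type of money do japanese use", "what is the timezone in vancouver"]
# One row per question: column-i representations (random stand-ins here).
Q = rng.normal(size=(len(questions), d))

def nearest(query_vec, k=3):
    sims = (Q @ query_vec) / (np.linalg.norm(Q, axis=1) * np.linalg.norm(query_vec))
    return [questions[i] for i in np.argsort(-sims)[:k]]

print(nearest(Q[0]))
```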
As described in (Yao and Van Durme, 2014), the Freebase Search API returned correct entities for 86.4% of questions in top one results. Because some questions use the abbreviation or a part of its mention to express an entity. For example, it is not trivial to link jfk to John F. Kennedy in the question “where did jfk and his wife live”. A better entity retrieval step should be developed for the open question answering scenario. Time-Aware Questions We need to compare date values for some time-aware questions. For instance, to answer the question “who is johnny cash’s first wife”, we have to know the order of several marriages by comparing the marriage date. Its correct response should contain only one entity (vivian liberto). However, our system additionally outputs june carter cash who is his second wife, because both the candidate answers are connected to johnny cash by the relation people.person.spouse s. In order to solve this issue, we need to define some ad-hoc operators used for comparisons or develop more advanced semantic representations. Ambiguous Questions Some questions are ambiguous to obtain their correct representations. For example, the question what has anna kendrick been in is used to ask what movies she has played in. This question does not have explicit clue words to indicate the meanings, so it is difficult to rank the candidates. Moreover, the question who is aidan quinn is employed to ask what his occupation is. It also lacks sufficient clues for question understanding, and using who is to ask occupation is rare in the training data. 6 Conclusion and Future Work This paper presents a method for question answering over FREEBASE using multi-column convolutional neural networks (MCCNNs). MCCNNs share the same word embeddings, and use multiple columns of convolutional neural networks to learn the representations of different aspects of questions. Accordingly, we use low-dimensional embeddings to represent multiple aspects of candidate answers, i.e., answer path, answer type, and answer context. We estimate the parameters from question-answer pairs, and use question paraphrases to train the columns of MCCNNs in a multi-task learning manner. Experimental results on WEBQUESTIONS show that our approach 267 achieves better or comparable performance comparing with baselines. There are several interesting directions that are worth exploring in the future. For instance, we are integrating more external knowledge source, such as CLUEWEB (Lin et al., 2012), to train MCCNNs in a multi-task learning manner. Furthermore, as our model is capable of detecting the most important words in a question, it would be interesting to use the results to mine effective question patterns. Acknowledgments This research was supported by NSFC (Grant No. 61421003) and the fund of the State Key Lab of Software Development Environment (Grant No. SKLSDE-2015ZX-05). References Junwei Bao, Nan Duan, Ming Zhou, and Tiejun Zhao. 2014. Knowledge-based question answering as machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 967– 976. Association for Computational Linguistics. Yoshua Bengio. 2012. Practical recommendations for gradient-based training of deep architectures. In Neural Networks: Tricks of the Trade, pages 437– 478. Jonathan Berant and Percy Liang. 2014. Semantic parsing via paraphrasing. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1415–1425. Association for Computational Linguistics. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1533–1544. Association for Computational Linguistics. Kurt D. Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In International Conference on Management of Data, pages 1247–1250. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014a. Question answering with subgraph embeddings. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 615–620. Association for Computational Linguistics. Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014b. Open question answering with weakly supervised embedding models. In Machine Learning and Knowledge Discovery in Databases - European Conference, ECML PKDD 2014, Nancy, France, September 15-19, 2014. Proceedings, Part I, pages 165–180. Qingqing Cai and Alexander Yates. 2013. Large-scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 423–433. Association for Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, November. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121–2159, July. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 1535–1545, Stroudsburg, PA, USA. Association for Computational Linguistics. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1608–1618. Association for Computational Linguistics. Edward Grefenstette, Phil Blunsom, Nando de Freitas, and Moritz Karl Hermann, 2014. Proceedings of the ACL 2014 Workshop on Semantic Parsing, chapter A Deep Architecture for Semantic Parsing, pages 22– 27. Association for Computational Linguistics. Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum´e III. 2014. A neural network for factoid question answering over paragraphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 633–644. Association for Computational Linguistics. Jayant Krishnamurthy and Tom M Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 754–765. Association for Computational Linguistics. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic ccg grammars from logical form with higherorder unification. 
In Proceedings of the 2010 conference on empirical methods in natural language processing, pages 1223–1233. Association for Computational Linguistics. 268 Tom Kwiatkowski, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1545–1556. Association for Computational Linguistics. Xin Li and Dan Roth. 2002. Learning question classifiers. In COLING, pages 1–7. P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590–599. Thomas Lin, Mausam, and Oren Etzioni. 2012. Entity linking at web scale. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, AKBCWEKEX ’12, pages 84–88, Stroudsburg, PA, USA. Association for Computational Linguistics. Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-scale semantic parsing without question-answer pairs. Transactions of the Association of Computational Linguistics – Volume 2, Issue 1, pages 377–392. D.E. Rumelhart, G.E. Hinton, and R.J. Williams. 1986. Learning representations by back-propagating errors. Nature, 323(6088):533–536. Nathan Srebro and Adi Shraibman. 2005. Rank, tracenorm and max-norm. In Proceedings of the 18th annual conference on Learning Theory, pages 545– 560. Springer-Verlag. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In ACL. Xuchen Yao and Benjamin Van Durme. 2014. Information extraction over structured data: Question answering with freebase. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 956–966. Association for Computational Linguistics. Xuchen Yao, Jonathan Berant, and Benjamin Van Durme, 2014. Proceedings of the ACL 2014 Workshop on Semantic Parsing, chapter Freebase QA: Information Extraction or Semantic Parsing?, pages 82–86. Association for Computational Linguistics. Wen-tau Yih, Xiaodong He, and Christopher Meek. 2014. Semantic parsing for single-relation question answering. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 643–648. Association for Computational Linguistics. Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep Learning for Answer Sentence Selection. In NIPS Deep Learning Workshop, December. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In In Proceedings of the 21st Conference on Uncertainty in AI, pages 658–666. 269
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 270–280, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Hubness and Pollution: Delving into Cross-Space Mapping for Zero-Shot Learning Angeliki Lazaridou Georgiana Dinu Marco Baroni Center for Mind/Brain Sciences University of Trento {angeliki.lazaridou|georgiana.dinu|marco.baroni}@unitn.it Abstract Zero-shot methods in language, vision and other domains rely on a cross-space mapping function that projects vectors from the relevant feature space (e.g., visualfeature-based image representations) to a large semantic word space (induced in an unsupervised way from corpus data), where the entities of interest (e.g., objects images depict) are labeled with the words associated to the nearest neighbours of the mapped vectors. Zero-shot cross-space mapping methods hold great promise as a way to scale up annotation tasks well beyond the labels in the training data (e.g., recognizing objects that were never seen in training). However, the current performance of cross-space mapping functions is still quite low, so that the strategy is not yet usable in practical applications. In this paper, we explore some general properties, both theoretical and empirical, of the cross-space mapping function, and we build on them to propose better methods to estimate it. In this way, we attain large improvements over the state of the art, both in cross-linguistic (word translation) and cross-modal (image labeling) zero-shot experiments. 1 Introduction In many supervised problems, the parameters of a classification function are estimated on (x, y) pairs, where x is a vector representing a training instance in some feature space, and y is the label assigned to the instance. For example, in image labeling x contains visual features extracted from a picture and y is the name of the object depicted in the picture (Grauman and Leibe, 2011). Since each label is treated as an unanalyzed primitive, this approach requires ad-hoc annotation for each label of interest, and it will not scale up to challenges where the potential label set is vast (for example, bilingual dictionary induction, where the label set corresponds to the full vocabulary of the target language). Zero-shot methods (Palatucci et al., 2009) address the scalability problem by building on the observation that the labels of interest are often words (or longer linguistic expressions), which stand in a semantic similarity relation to each other. Moreover, distributional approaches allow us to estimate very large semantic word spaces in an efficient and unsupervised manner, using just unannotated text corpora as input (Turney and Pantel, 2010). Extensive evidence has shown that the similarity estimates obtained by representing words as vectors in such corpus-induced semantic spaces are extremely accurate (Baroni et al., 2014). Under the assumption that the domain of interest (e.g., objects in pictures, words in a source language) exhibits comparable similarity structure to that manifested in language, we can rephrase the learning task, from inducing multiple functions from the source feature space onto independent atomic labels, to that of estimating a single crossspace mapping function from vectors in the source feature space onto vectors for the corresponding word labels in distributional semantic space. 
The induced function can then also be applied to a data-point whose label was not used for training. The word corresponding to the nearest neighbour of the mapped vector in the latter space is used as the label of the data point. Zero-shot learning using distributional semantic spaces was originally proposed for brain signal decoding (Mitchell et al., 2008), but it has since been extensively applied in other domains, including image labeling (Frome et al., 2013; Lazaridou et al., 2014; Socher et al., 2013) and bilingual dictionary/phrase table induction (Dinu and Baroni, 2014; Mikolov et al., 270 2013a), the two applications we focus on here. Effective zero-shot learning by cross-space mapping could get us through the manual annotation bottleneck that hampers many applications. However, in practice, the accuracy in label retrieval with current mapping methods is still too low for practical uses. In image labeling, when a search space of realistic size is considered, accuracy is just above 1% (which is still well above chance for large search spaces). In bilingual lexicon induction, accuracy reaches values around 30% (across words of varying frequency), which are definitely more encouraging, but still indicate that only 1 word in 3 will be translated correctly. In this article, we look at some general properties of the linear cross-modal mapping function standardly used for zero-shot learning, in order to achieve a better understanding of its shortcomings, and improve its quality by devising methods to overcome them. First, when the mapping function is estimated with least-squares error techniques, we observe a systematic increase in hubness (Radovanovi´c et al., 2010b), that is, in the tendency of some vectors (“hubs”) to appear in the top neighbour lists of many test items. We connect hubness to least-squares estimation, and we show how it is greatly mitigated when the mapping function is estimated with a max-margin ranking loss instead. Still, switching to max-margin greatly improves accuracy in the cross-linguistic context, but not for vision-to-language mapping. In the cross-modal setting, we observe indeed a different problem, that we name (training instance) pollution: The neighbourhoods of mapped test items are “polluted” by the target vectors used in training. This suggests that cross-modal mapping suffers from overfitting issues, and consequently from poor generalization power. Taking inspiration from domain adaptation, which addresses similar generalization concerns, and self-learning, we propose a technique to augment the training data with automatically constructed examples that force the function to generalize better. Having shown the advantages of a ranking loss, our final contribution is the adaptation of some insights from the max-margin literature to our setting, in particular concerning the choice of negative examples. This leads to further accuracy improvements. We thus conclude the paper by reporting zero-shot performances in both cross-modal and cross-language settings that are well above the curcross-linguistic cross-modal former state of art 33.0 0.5 standard mapping 29.7 1.1 max-margin - §3 39.4 1.9 data augmentation - §4 NA 3.7 negative evidence - §5 40.2 5.6 Table 1: Roadmap. Proposed changes to crossspace mapping training and resulting percentage Precision @1 in our two experimental setups. rent state of the art. Table 1 provides a roadmap and summary of our results. 
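Concretely, once a linear map W has been estimated, zero-shot labeling reduces to a matrix-vector product followed by a nearest-neighbour search in the word space. A numpy sketch with toy dimensions (random map and word vectors; in the experiments below the map is learned and the word space is corpus-induced):

```python
import numpy as np

rng = np.random.default_rng(6)
d_src, d_tgt, n_words = 100, 50, 5000    # toy sizes; e.g. 4096 -> 300 and 200K words in practice

W = rng.normal(size=(d_tgt, d_src))              # cross-space mapping (random stand-in)
word_space = rng.normal(size=(n_words, d_tgt))   # corpus-induced word vectors
word_space /= np.linalg.norm(word_space, axis=1, keepdims=True)

def zero_shot_label(x, k=5):
    """Map a source-space vector x and return the indices of the k nearest word vectors."""
    y_hat = W @ x
    sims = word_space @ (y_hat / np.linalg.norm(y_hat))   # cosine similarities
    return np.argsort(-sims)[:k]

print(zero_shot_label(rng.normal(size=d_src)))
```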
2 Experimental Setup Cross-linguistic experiments In the crosslinguistic experiments, we learn a mapping from the semantic space of language A to the semantic space of language B, which can then be used for translating words outside the training set. Specifically, given the vector representation of a word in language A, we apply the mapping to obtain an estimate of the vector representation of its meaning in language B, returning the nearest neighbour of the mapped vector in the B space as candidate translation. We focus on translating from English to Italian and adopt the setup (word vectors, training and test data) of Dinu et al. (2015). For a set of 200K words, 300-dimensional vectors were built using the word2vec toolkit,1 choosing the CBOW method.2 CBOW, which learns to predict a target word from the ones surrounding it, produces state-of-the-art results in many linguistic tasks (Baroni et al., 2014). The word vectors were induced from corpora of 2.8 and 1.6 billion tokens, respectively, for English and Italian.3 The train and test English-to-Italian translation pairs were extracted from a Europarl-derived dictionary (Tiedemann, 2012).4 The 5K most frequent translation pairs were used for training, while the test set includes 1.5K English words equally split into 5 frequency bins. The search for the correct translation is performed in a semantic space of 200K 1https://code.google.com/p/word2vec/ 2Other hyperparameters, which we adopted without further tuning, include a context window size of 5 words to either side of the target, setting the sub-sampling option to 1e-05 and estimating the probability of target words by negative sampling, drawing 10 samples from the noise distribution (Mikolov et al., 2013b). 3Corpus sources: http://wacky.sslmit.unibo. it, http://www.natcorp.ox.ac.uk 4http://opus.lingfil.uu.se/ 271 Italian words.5 Cross-modal experiments In the cross-modal experiments, we induce a mapping from visual to linguistic space. Specifically, given an image, we apply the mapping to its visual vector representation to obtain an estimate of its representation in linguistic space, where the word associated to the nearest neighbour is retrieved as the image label. Similarly to translation pairs in the crosslinguistic setup, we create a list of “visual translation” pairs between images and their corresponding noun labels. Our starting point are the 5.1K labels in ImageNet (Deng et al., 2009) that occur at least 500 times in our English corpus and have concreteness score ≥5, according to Turney et al. (2011). For each label, we sample 100 pictures from its ImageNet entry, and associate each picture with the 4094-dimensional layer (fc7) at the top of the pre-trained convolutional neural network model of Krizhevsky et al. (2012), using the Caffe toolkit (Jia et al., 2014). The target word space is identical to the English space used in the cross-linguistic experiment. Finally, we use 75% of the labels (and the respective images) for training and the remaining 25% of the labels for testing.6 From the 127.5K images corresponding to test labels, we sample 1K images as our test set. For zero-shot evaluation purposes, the search for the correct label is performed in the space of 5.1K possible labels, unless otherwise specified. However, when quantifying hubness and pollution, in order to have a setting comparable to that of crosslanguage mapping, we use the full set of 200K English words as search space. 
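As an aside, comparable CBOW vectors can be trained with the gensim library (gensim version 4 or later assumed; the paper itself uses the word2vec toolkit). The corpus below is a two-sentence placeholder for the billion-token corpora described above:

```python
from gensim.models import Word2Vec

# Placeholder corpus; the actual vectors are trained on ~2.8B (English) and ~1.6B (Italian) tokens.
sentences = [["the", "cat", "sat", "on", "the", "mat"],
             ["the", "dog", "chased", "the", "cat"]]

model = Word2Vec(
    sentences,
    vector_size=300,   # 300-dimensional vectors
    window=5,          # 5 words to either side of the target
    sg=0,              # CBOW
    negative=10,       # negative sampling with 10 noise samples
    sample=1e-5,       # sub-sampling threshold
    min_count=1,       # keep rare words in this toy corpus
)
print(model.wv["cat"].shape)
```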
Learning objectives We assume that we have cross-space “translation” pairs available for a set of |Tr| items (xi, yi) = {xi ∈Rd1, yi ∈Rd2}. Moreover, following previous work, we assume that the mapping function is linear. For estimating its parameters W ∈Rd1×d2, we consider two objectives. The first is L2-penalized least squares 5Faithful to the zero-shot setup, in our experiments there is never any overlap between train and test words; however, to make the task more challenging, we include the train words in the search space, except where expressly indicated. 6At training time, we average the 100 vectors associated to a label into a single representation, to reduce training set size while minimizing information loss. At test time, as normally done, we present the model with single image visual vectors. (ridge): ˆ W = argmin W∈Rd1×d2 ∥XW −Y∥+ λ∥W∥, which has an analytical solution. The second objective is a margin-based ranking loss (max-margin) similar in spirit to the one used in similar cross-modal experiments with WSABIE (Weston et al., 2011) and DeViSE (Frome et al., 2013). The loss for a given pair of training items (xi, yi) and the corresponding mappingbased prediction ˆyi = Wxi is defined as k X j̸=i max{0, γ + dist(ˆyi, yi) −dist(ˆyi, yj)}, where dist is a distance measure, in our case the inverse cosine, and γ and k are tunable hyperparameters denoting the margin and the number of negative examples, respectively. Intuitively, the goal of the max-margin objective is to rank the correct translation yi of xi higher than any other possible translation yj. In theory, the summation in the equation could range over all possible labels, but in practice this is too expensive (e.g., in the cross-linguistic experiments the search space contains 200K candidate labels!), and it is usually computed over just a portion of the label space. In Weston et al. (2011), the authors propose an efficient way of selecting negative examples, in which they randomly sample, for each training item, labels from the complete set, and pick as negative sample the first label violating the margin. This guarantees that there will be exactly as many weight updates as training items. Another possibility is proposed in Mikolov et al. (2013b), where negative samples are picked from a nonitem specific distribution (e.g., the uniform distribution).7 For the experiments in Sections 3 and 4, we follow a more general setup in which the size of the margin and number of negative samples is tuned for each task. In this way, for a sufficiently large margin and number of negative samples, we increase the probability of performing a weight update per training item. We estimate the mapping parameters W with stochastic gradient descent and per-parameter learning rates tuned with Adagrad (Duchi et al., 2011). The tuning of hyperparameters γ and k is performed on a random 25% subset of the training data. 7The notion of negative samples is not unique to marginbased learning; in Mikolov et al. (2013b), the authors used it to efficiently estimate a word probability distribution. 272 0 10 20 30 40 50 0 0.001 0.002 0.003 0.004 0.005 0.006 0.007 0.008 0.009 0.01 Hubness in Cross−lingual Experiment N20 values Pr(N20) ridge max−margin gold 5 10 15 20 25 30 35 40 0 0.001 0.002 0.003 0.004 0.005 0.006 0.007 0.008 0.009 0.01 Hubness in Cross−modal Experiment N20 values Pr(N20) ridge max−margin gold Figure 1: Hubness distribution in cross-linguistic (left) and cross-modal (right) search spaces. 
The hubness score (N20) is computed on the top-20 neighbour lists of the test items, using their original (gold), ridge- or max-margin-mapped vectors as query terms. 3 Hubness High-dimensional spaces are often affected by hubness (Radovanovi´c et al., 2010b; Radovanovi´c et al., 2010a), that is, they contain certain elements – hubs – that are near many other points in space without being similar to the latter in any meaningful way. As recently noted by Dinu et al. (2015), the hubness problem is greatly exacerbated when one looks at the nearest neighbours of vectors that have been mapped across spaces with ridge.8 Given a set of query vectors with the corresponding top-k nearest neighbour lists, we can quantify the degree of hubness of an item in the search space (parameterized by k) by the number of lists in which it occurs. Nk(y), the hubness at k of an item y, is computed as follows: Nk(y) = |{x ∈T|y ∈NNk(x, S)}|, where S denotes the search space, T denotes the set of query items and NNk(x, S) denotes the k nearest neighbors of x in S. Figure 1 reports N20 distributions across the cross-linguistic and cross-modal search spaces, using the respective test items as query vectors. The blue line shows the distributions for the “gold” vectors (that is, the vectors in the target space we would like to approximate). The red line shows the same distributions when neighbours are 8Dinu et al. (2015) observe, but do not attempt to understand hubness, as we do here. They propose to address it with methods to re-rank neighbour lists, which are less general and should be largely complementary to our effort to improve estimation of the cross-mapping function. Cross-linguistic Cross-modal blockmonthon (50) smilodon (40) hashim (28) pintle (33) akayev (27) knurled (27) autogiustificazione (27) handwheel (24) limassol (26) circlip (23) regulars (26) black-footed (23) 18 (25) flatbread (22) Table 2: Top ridge hubs, together with N20 scores. Note that cross-linguistic hubs are supposed to be Italian words. queried for the ridge-mapped test vectors (ignore black lines for now). In both spaces, when the query vectors are mapped, hubness increases dramatically. The largest hubs for the original test items occur in 15 neighbour lists or less. With the mapped vectors, we find hubs occurring in 40 lists or more. The figure also shows that, in both spaces, we observe more points with smaller but non-negligible N20 (e.g., around 10) when mapped vectors are queried. In both spaces, the difference in hubness is very significant according to a cross-tab test (p<10−30). Finally, as Table 2 shows, the largest hubs are by no means terms that we might expect to occur as neighbours of many other items on semantic grounds (e.g., very general terms), but rather very specific and rare words whose high hubness cannot possibly be a genuine semantic property. Causes of hubness Why should the mapping function lead to an increase in hubness? We conjecture that this is due to an intrinsic property of least-squares estimation. Given the training ma273 trices X and Y, and the projection matrix W obtained by minimizing squared error, each column ˆy∗,i of ˆY = XW is the orthogonal projection of y∗,i, the corresponding Y column onto the column space of X (Strang, 2003, Ch. 4). Consequently, y∗,i = ϵi + ˆy∗,i, where the ϵi error vector is orthogonal to ˆy∗,i. It follows that ||y∗,i||2 ≥= ||ˆy∗,i||2. 
Since y∗,i and ˆy∗,i have equal means (because the error terms in ϵi must sum to 0), it immediately follows from the squared length inequality that ˆy∗,i has lower or equal variance to y∗,i. Since this holds for all columns of ˆY, it follows in turn that the set of mapped vectors in ˆY has lower or equal variance to the corresponding set of original vectors in Y. Coming back to hubness, a set of lower variance points (such as the mapped vectors) will result in higher hubness since the points will on average be closer to each other. The problem is likely to be further exacerbated by the property of least-squares to ignore relative distances between points (the objective only aims at making predicted and observed vectors look like each other), Strictly, the theoretical result only holds for the training points. However, to the extent that the training set is representative of what will be encountered in the test set, it should also extend to test data (and if training and testing data are very different, the mapping function will generalize very poorly anyway). Moreover, the result holds for a pure least-squares solution, without the ridge L2 regularization term. Whether it also applies to ridge-based estimates will depend on the relative impact of the least-squares and L2 terms on the final solution (and it is not excluded that the L2 term might also independently reduce variance, of course). Empirically, we find that, indeed, lower variance also characterizes test vectors mapped with a ridge-estimated function. Interestingly, in the literature on cross-space mapping we find that authors choose a different cost function than ridge, without motivating the choice. Socher et al. (2014) mention in passing that max-margin outperforms a least-squarederror cost for cross-modal mapping. Max-margin as a solution to hubness Referring back to Figure 1, we see that when ridge estimation is replaced by max-margin (black line), there is a considerable decrease in hubness in both settings. This is directly reflected in a large increase in performance in our crosslinguistic (English-to-Italian) zero-shot task (left two columns of Table 3), with the largest improvement for the all important P@1 measure (equivalent to accuracy).9 These results are well above the current best cross-language accuracy for cross-modal mapping without added orthographic cues (33%), attained by Mikolov et al. (2013a).10 The absolute performance figures are low in the challenging cross-modal setting, but here too we observe a considerable improvement in accuracy when max-margin is applied. Indeed, we are already above the cross-modal zero-shot mapping state of the art for a search space of similar size (0.5% accuracy in Frome et al. (2013)). Still, the improvement over ridge (while present) is not as large for the less strict (higher ranks) performance scores. Table 4 confirms that the improvement brought about by max-margin is indeed (at least partially) due to hubness reduction. A large proportion of vectors retrieved as top-1 predictions (translations/labels) are hubs when mapping is trained with ridge, but the proportion drops dramatically with max-margin. Still, more than 1/5 top predictions for cross-modal mapping with max-margin are hubs (vs. less than 1/10 for the original vectors). Now, the mathematical properties we reviewed above suggest that, for least-squares estimation, hubness is caused by general reduced variance of the space after mapping. Thus, hubs should be vectors that are near the mean of the space. 
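For reference, the two objectives compared in this section can be sketched as follows: ridge has the usual closed-form solution, while max-margin is driven by per-pair hinge losses over sampled negatives (gradients and Adagrad updates omitted; cosine distance stands in for the inverse-cosine dist of the definition). Toy random data, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(7)
n, d1, d2 = 500, 40, 30
X = rng.normal(size=(n, d1))   # source-space training vectors
Y = rng.normal(size=(n, d2))   # target-space training vectors

# Ridge: W = argmin ||XW - Y||^2 + lambda ||W||^2 has a closed-form solution.
lam = 1.0
W_ridge = np.linalg.solve(X.T @ X + lam * np.eye(d1), X.T @ Y)

def cos(u, v):
    return (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

def max_margin_loss(W, i, neg_indices, gamma=0.1):
    # Ranking loss for one training pair: the mapped vector should be closer
    # (in cosine-distance terms) to its gold target than to the sampled negatives.
    y_hat = X[i] @ W
    return sum(max(0.0, gamma + (1 - cos(y_hat, Y[i])) - (1 - cos(y_hat, Y[j])))
               for j in neg_indices)

negs = rng.choice(np.arange(1, n), size=5, replace=False)   # random negatives for item 0
print(W_ridge.shape, max_margin_loss(W_ridge, 0, negs))
```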
The first row of Table 5 confirms that the hubs found in the neighbourhoods of ridgemapped query terms are items that tend to be closer to the search space mean vector, and that this effect is radically reduced with max-margin estimation. However, the second row of the table shows another factor at play, that has a major role in the cross-modal setting, and it is only partially addressed by max-margin estimation: Namely, in vision-to-language mapping, there is a strong tendency for hubs (that, recall, have an important effect on performance, as they enter many nearest neighbour lists) to be close to a training data point. 9We have no realistic upper-bound estimate, but due to different word senses, synonymy, etc., it is certainly not 100%. 10Although the numbers are not fully comparable because of different language pairs and various methodological details, their method is essentially equivalent to our ridge approach we are clearly outperforming. 274 Cross-linguistic Cross-modal ridge max-margin ridge max-margin P@1 29.7 38.4 1.1 1.9 P@5 44.2 54.2 4.8 5.4 P@10 49.1 60.4 7.9 9.0 Table 3: Ridge vs. max-margin in zeroshot experiments. Precision @N results crosslinguistically (test items: 1.5K, search space: 200K) and cross-modally (test items: 1K, search space: 5.1K). Cross-linguistic Cross-modal ridge max-margin gold ridge max-margin gold 19.6 9.8 0.6 55.8 21.6 7.8 Table 4: Hubs as top predictions. Percentage of top-1 neighbours of test vectors in zero-shot experiments of Table 3 with N20 > 5. Cross-linguistic Cross-modal cosine with ridge max-margin ridge max-margin full-space mean 0.21 0.06 0.13 -0.01 training point 0.15 0.12 0.34 0.24 Table 5: Properties of hubs. Spearman ρ of N20 scores with cosines to mean vector of full search space (top) and nearest training item (bottom), across all search space elements. All correlations significant (p<0.001) except cross-modal max-margin hubness/full-space mean. 4 Pollution The quantitative results and post-hoc analysis of hubs in Section 3 suggest that cross-modal mapping is facing a serious generalization problem. To get a better grasp of the phenomenon, we define a binary measure of (training data) pollution for a queried item x and parameterized by k, such that pollution is 1 if x has a (target) training item y among its k nearest neighbours, 0 otherwise. Formally: Npol k,S(x) = [[∃y ∈YTr : y ∈NNk,S(x)]], where YTr is the matrix of target vectors used in training, NNk,S(y) denotes the top k neighbors of y in search space S, and [[z]] is an indicator function.11 11Pollution is of course an effect of overfitting, but we use this more specific term to refer to the tendency of training vectors to “pollute” nearest neighbour lists of mapped vectors. The average pollution Npol 1,S of all test items in the cross-modal experiment, when |S|=200K is 18%, which indicates that in 1/5 of cases the returned label is that of a training point. The equivalent statistic in the cross-linguistic experiment drops to 8.7% (words tend to be more varied than the set of concrete, imageable concepts used for image annotation tasks, and so the cross-linguistic training set is probably less uniform than the one used in the vision-to-language setting). The real extent of the generalization problem in the cross-modal setup becomes more obvious if we restrict the search space to labels effectively associated to an image in our data set (|S|=5.1K). 
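Both diagnostics, the hubness score N_k and the pollution indicator N^pol, are straightforward to compute from nearest-neighbour lists; a sketch over toy random spaces:

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(8)
d, n_search, n_test, n_train = 50, 2000, 300, 100
S = rng.normal(size=(n_search, d))            # search space vectors
S /= np.linalg.norm(S, axis=1, keepdims=True)
queries = rng.normal(size=(n_test, d))        # (mapped) test vectors
train_idx = set(range(n_train))               # indices of target vectors used in training

def topk(q, k):
    sims = S @ (q / np.linalg.norm(q))
    return np.argsort(-sims)[:k]

k = 20
neighbour_lists = [topk(q, k) for q in queries]

# Hubness: N_k(y) = number of test items that have y among their k nearest neighbours.
N_k = Counter(int(y) for nn in neighbour_lists for y in nn)
print("largest hub:", N_k.most_common(1))

# Pollution: share of test items whose top-k neighbours contain a training target vector.
pollution = np.mean([any(int(y) in train_idx for y in nn) for nn in neighbour_lists])
print("pollution@%d: %.2f" % (k, pollution))
```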
In this case, the average pollution Npol 1,S across all test items jumps to 88%, that is, the vast majority of test images are annotated with a label coming from the training data. Clearly, there is a serious problem of overfitting to the training subspace. While we came to this observation by inspecting the properties of hubs, other work in zero-shot for image labeling has indirectly noted the same. Frome et al. (2013) empirically showed that the performance of the system is higher when removing training labels from the search space, while Norouzi et al. (2014) proposed a zero-shot method that avoids explicit cross-modal mapping. Adapting to the full search space by data augmentation High training-data pollution indicates that cross-modal mapping does not generalize well beyond the kind of data points it encountered in learning. This is a special case of the dataset bias problem (Torralba and Efros, 2011) and, given that the latter has been addressed as a domain adaptation problem (Gong et al., 2012; Donahue et al., 2013), we adopt here a similar view. Self-training has been successfully used for domain adaptation in NLP, e.g., in syntactic parsing. Given the limited amount of syntactically annotated data coming from monotonous sources (e.g., the Wall Street Journal), parsers show a big drop in performance when applied to different domains (e.g., reviews), since training and test domains differ dramatically, thus affecting their generalization performance. In a nutshell, the idea behind selftraining (McClosky et al., 2006; Reichart and Rappoport, 2007) is to use manually annotated data (xA i , .., xA N, yA i , .., yA N) from domain A to train a parser, feed the trained parser with data xB i , .., xB K from domain B in order to obtain their automated annotations ˆyB i , .., ˆyB K and then retrain the parser 275 dolphin tarantula highland whale anteater whisky orca arachnid lowland porpoise spider bagpipe cetacean opossum glen shark scorpion distillery Table 6: Visual chimeras for dolphin, tarantula and highland. with a combination of “clean” data from domain A and “noisy” data from domain B. In our setup, self-training would be applied by labeling a larger set of images with a cross-modal mapping function estimated on the initial training data, and then using both sources of labeled data to retrain the function. Although the idea of self-training for inducing cross-modal mapping functions is appealing, especially given the vast amount of unlabeled data available out there, the very low performance of current cross-modal mapping functions makes the effort questionable. We would like to exploit unannotated data representative of the search space, without relying on the output of cross-modal mapping for their annotation. One way to achieve this is to use data augmentation techniques that are representative of the search space. Data augmentation is popular in computer vision, where it is performed (among others) by data jittering, visual sampling or image perturbations. It has proven beneficial for both “deep” (Krizhevsky et al., 2012; Zeiler and Fergus, 2014) and “shallow” (Chatfield et al., 2014) systems, and it was recently introduced to NLP tasks (Zhang and LeCun, 2015). Specifically, in order to train the mapping function using both annotated data and points that are representative of the full search space, we rely on a form of data augmentation that we call visual chimera creation. 
For every item yi /∈YTr in the search space S, we use linguistic similarity as a proxy of visual similarity, and create its visual vector ˆxi by averaging the visual vectors corresponding to the nearest words in language space that do occur as labels in the training set. Table 6 presents some examples of visual chimeras. For yi=dolphin, the visual vectors of other cetacean mamnone chimera-5 chimera-10 P@1 1.9 3.7 3.2 P@5 5.4 10.9 10.5 P@10 9.0 15.8 15.9 Table 7: Cross-modal zero-shot experiment with data augmentation. Labeling precision @N with no data augmentation (none) and when using top 5 (chimera-5) and top 10 (chimera-10) nearest neighbors from training set of each item in the search space to build the corresponding chimeras (1K test items, 5.1K search space). mals are averaged to create the chimera ˆxi. Since linguistic similarity is not always determined by visual factors, the method also produces noisy data points. For yi=tarantula, opossums enter the picture, while for yi=highland images of “topically” similar concepts are used (e.g., bagpipe). Table 7 reports cross-modal zero-shot labeling when training with max-margin and data augmentation. We experiment with visual chimeras constructed using 5 vs. 10 nearest neighbours. While the examples above suggest that the process injects some noise in the training data, we also observe a decrease of pollution Npol 1,S from 88% when using the “clean” training data, to 71% and 73% when expanding them with chimeras (for chimera-5 and chimera-10, respectively). Reflecting this drop in pollution, we see large improvements in precision at all levels, when chimeras are used (no big differences between 5 or 10 neighbours). The improvements brought about by the chimera method are robust. First, Table 8 reports performance when the search space excludes the training labels, showing that data augmentation is beneficial beyond mitigating the bias in favor of the latter. In this setup, chimera-5 is clearly outperforming chimera-10 (longer neighbour lists will include more noise), and we focus on it from here on. All experiments up to here follow the standard cross-modal zero-shot protocol, in which the search space is given by the union of the test and training labels, or a subset thereof. Next, we make the task more challenging by increasing it with 1K extra elements acting as distractors. The distractors are either randomly sampled from our usual 200K English word space, or, in the most challenging scenario, picked among those words, in the same space, that are among the top-5 near276 none chimera-5 chimera-10 P@1 6.7 9.3 8.3 P@5 21.7 25.2 21.3 P@10 29.9 34.3 29.7 Table 8: Cross-modal zero-shot experiment with data augmentation, disjoint train/search spaces. Same setup as Table 8, but search space excludes training elements (1K test items, 1K search space). random related none chimera-5 none chimera-5 P@1 0.8 3.3 1.9 2.8 P@5 5.3 9.0 4.8 8.8 P@10 8.8 13.3 7.9 12.6 Table 9: Cross-modal zero-shot experiment with data augmentation, enlarged search space. Labeling precision @N with no data augmentation (none) and when using top 5 (chimera-5) nearest neighbors from training set of each item in the search space to build the corresponding chimeras. Test items: 1K. Search space: 5.1K+1K extra distractors from a 200K word space, either randomly picked (random), or related to the training items. est neighbours of a training element. Again, we create one visual chimera for each label in the search space. Results are presented in Table 9. 
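Chimera construction itself is just an average of the visual vectors of the linguistically nearest training labels; a toy sketch (random vectors in place of the CNN features and word vectors used in the experiments):

```python
import numpy as np

rng = np.random.default_rng(9)
d_text, d_vis = 50, 128
train_labels = ["whale", "orca", "porpoise", "shark", "spider", "scorpion"]
text_vecs = {w: rng.normal(size=d_text) for w in train_labels + ["dolphin", "tarantula"]}
visual_vecs = {w: rng.normal(size=d_vis) for w in train_labels}  # averaged image features per training label

def chimera(label, k=5):
    """Build a pseudo visual vector for a label outside the training set by averaging
    the visual vectors of its k nearest training labels in the text (word) space."""
    q = text_vecs[label]
    ranked = sorted(train_labels,
                    key=lambda w: -(text_vecs[w] @ q) /
                                   (np.linalg.norm(text_vecs[w]) * np.linalg.norm(q)))
    return np.mean([visual_vecs[w] for w in ranked[:k]], axis=0)

x_hat_dolphin = chimera("dolphin", k=5)
print(x_hat_dolphin.shape)   # (128,), used as an extra (x_hat, y) training pair
```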
As expected, performance is negatively affected with both plain and data-augmented models, but the latter is still better in absolute terms. While chimera-5 undergoes a larger drop when the search contains many elements similar to the training data (“related” column), which is explained by the fact that visual chimeras will often include the distractor items of this setup, it appears to be more resistant against random labels, which in many cases are words that bear no resemblance to the training data (e.g., naushad, yamato, 13-14). The picture when using no data augmentation is exactly the opposite, with the model being more harmed, at P@1, by the random labels. Finally, Table 10 presents results in the crosslinguistic setup, when applying the same data augmentation technique. In this case, we augment the 5K training elements with 11.5K chimeras, for the 1.5K test elements and 10K randomly sampled distractors. For these 11.5K elements, we associate their Italian (target space) label yi with a none chimera-5 P@1 38.4 31.1 P@5 54.2 46.1 P@10 60.4 51.3 Table 10: Cross-linguistic zero-shot experiment with data augmentation. Translation precision @N when learning with max-margin and no data augmentation (none) or data augmentation using the top 5 (chimera-5) nearest neighbors of 11.5K items in the 200K-word search space (1.5K test items). cat dog truck Figure 2: Looking for intruders. We pick truck rather than dog as negative example for cat. “pseudo-translation” vector ˆxi obtained by averaging the vectors of the English (source space) translations of the nearest Italian words to yi included in the training set. Results, in Table 10, show that in this case our data augmentation method is actually hampering performance. We saw that pollution affects the cross-linguistic setup much less than it affects the cross-modal one, and we conjecture that, consequently, in the translation task, there is not a large-enough generalization gain to make up for the extra noise introduced by augmentation. 5 Picking informative negative examples An interesting feature of the ranking max-margin objective lies in its active use of negative examples. While previous work in cross-space mapping has paid little attention to the properties that negative samples should possess, this has not gone unnoticed in the NLP literature on structured prediction tasks. Smith and Eisner (2005) propose a contrastive estimation framework in the context of POS-tagging, in which positive evidence derived from gold sentence annotations is extended with negative evidence derived by various neighbourhood functions that corrupt the data in particular ways (e.g., by deleting 1 word). Having shown the effectiveness of max-margin estimation in the previous sections, we now take 277 Cross-linguistic Cross-modal random intruder random intruder P@1 38.4 40.2 3.7 5.6 P@5 54.2 55.5 10.9 12.4 P@10 60.4 61.8 15.8 17.8 Table 11: Random vs. intruding negative examples. Zero-shot precision @N results when crossspace function is estimated using max-margin with random or “intruder” negative examples, crosslinguistically (test items: 1.5K, search space: 200K) and cross-modally (test items: 1K, search space: 5.1K). a first step towards engineering the negative evidence exploited by this method, in the context of inducing cross-space mapping functions. In particular, our idea is that, given a training instance xi, an informative negative example would be near the mapped vector ˆyi, but far from the actual gold target space vector yi. 
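This selection criterion can be sketched in a few lines; the sketch is our own hedged illustration with hypothetical names, not the authors' implementation, but the scoring rule it uses, the difference of the two cosines, is the one spelled out formally below.

import numpy as np

def cos(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def pick_intruder(y_hat, y_gold, gold_index, train_targets):
    """train_targets: target-space vectors of the training labels (list of np.array).
    Pick the label closest to the mapped vector y_hat but far from y_gold."""
    best_j, best_score = None, -np.inf
    for j, y_j in enumerate(train_targets):
        if j == gold_index:
            continue
        score = cos(y_hat, y_j) - cos(y_gold, y_j)  # s_j in the text below
        if score > best_score:
            best_j, best_score = j, score
    return best_j  # index of the negative example used in the margin term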
Intuitively, such “intruders” correspond to cases where the mapping function is getting the predictions seriously wrong, and thus they should be very informative in “correcting” the function mapping trajectories. This can seen as a vector-space interpretation of the max-loss update protocol (Crammer et al., 2006) that picks negative samples expected to harm performance more. Figure 2 illustrates the idea with a cartoon example. If cat is the gold target vector yi and ˆyi the corresponding mapped vector, then we are going to pick truck as negative example, since it is an intruder (near the mapped vector, far from the gold one). More formally, at each step of stochastic gradient descent, given a source space vector xi, its target gold label/translation yi in YTr and the mapped vector ˆyi, we compute sj = cos(ˆyi, yj) − cos(yi, yj), for all vectors yj in YTr s.t. j ̸= i, and pick as negative example for xi the vector with the largest sj. Table 11 presents zero-shot mapping results when intruding negative examples are used for max-margin estimation. For cross-modal mapping, we apply data augmentation as described in the previous section. While the absolute performance increase is relatively small (less than 2% in both setups), it is consistent. Furthermore, the proposed protocol results in lower Npol 1,S pollution in the cross-modal setup (from 71% to 63%). Finally, we observe that the learning behaviour of the two Number of Epochs 0 5 10 15 20 25 30 35 40 45 50 Precision@1 0.15 0.2 0.25 0.3 0.35 0.4 0.45 random intruder Figure 3: Learning curve with random or intruding negative samples in the cross-linguistic experiment. protocols (intruders vs. random) is different; the intruder approach is already achieving good performance after just few training epochs, since it can rely on more informative negative samples (see Figure 3). 6 Conclusion We have considered some general mathematical and empirical properties of linear cross-space mapping functions, suggesting one well-known (max-margin estimation) and two new (chimera augmentation and “intruder” negative sample adjustment) methods to improve their performance. With them, we achieve results well above the state of the art in both the cross-linguistic and the crossmodal setting. Both chimera and the intruder methods are flexible, and we plan to explore them further in future research. In particular, we want to devise more semantically-motivated methods to select chimera components and negative samples. Acknowledgments We thank Adam Liska, Yoav Goldberg and the anonymous reviewers for useful comments. We acknowledge ERC 2011 Starting Independent Research Grant n. 283554 (COMPOSES). References Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of ACL, pages 238–247, Baltimore, MD. 278 Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman. 2014. Return of the devil in the details: Delving deep into convolutional nets. arXiv preprint arXiv:1405.3531. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. The Journal of Machine Learning Research, 7:551–585. Jia Deng, Wei Dong, Richard Socher, Lia-Ji Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In Proceedings of CVPR, pages 248–255, Miami Beach, FL. Georgiana Dinu and Marco Baroni. 2014. 
How to make words with vectors: Phrase generation in distributional semantics. In Proceedings of ACL, pages 624–633, Baltimore, MD. Georgiana Dinu, Angeliki Lazaridou, and Marco Baroni. 2015. Improving zero-shot learning by mitigating the hubness problem. In Proceedings of ICLR Workshop Track, San Diego, CA. Published online: http://www.iclr.cc/doku.php?id= iclr2015:main. Jeff Donahue, Judy Hoffman, Erik Rodner, Kate Saenko, and Trevor Darrell. 2013. Semi-supervised domain adaptation with instance constraints. In In Proceedings of CVPR, pages 668–675. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. Andrea Frome, Greg Corrado, Jon Shlens, Samy Bengio, Jeff Dean, Marc’Aurelio Ranzato, and Tomas Mikolov. 2013. DeViSE: A deep visual-semantic embedding model. In Proceedings of NIPS, pages 2121–2129, Lake Tahoe, NV. Boqing Gong, Yuan Shi, Fei Sha, and Kristen Grauman. 2012. Geodesic flow kernel for unsupervised domain adaptation. In In Proceedings of CVPR, pages 2066–2073. Kristen Grauman and Bastian Leibe. 2011. Visual Object Recognition. Morgan & Claypool, San Francisco. Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Convolutional architecture for fast feature embedding. arXiv preprint arXiv:1408.5093. Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton. 2012. ImageNet classification with deep convolutional neural networks. In Proceedings of NIPS, pages 1097–1105, Lake Tahoe, Nevada. Angeliki Lazaridou, Elia Bruni, and Marco Baroni. 2014. Is this a wampimuk? cross-modal mapping between distributional semantics and the visual world. In Proceedings of ACL, pages 1403–1414, Baltimore, MD. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proceedings of HLT-NAACL, pages 152–159. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013a. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of NAACL, pages 746–751, Atlanta, Georgia. Tom Mitchell, Svetlana Shinkareva, Andrew Carlson, Kai-Min Chang, Vincente Malave, Robert Mason, and Marcel Just. 2008. Predicting human brain activity associated with the meanings of nouns. Science, 320:1191–1195. Mohammad Norouzi, Tomas Mikolov, Samy Bengio, Yoram Singer, Jonathon Shlens, Andrea Frome, Greg S Corrado, and Jeffrey Dean. 2014. Zero-shot learning by convex combination of semantic embeddings. In Proceedings of ICLR. Mark Palatucci, Dean Pomerleau, Geoffrey Hinton, and Tom Mitchell. 2009. Zero-shot learning with semantic output codes. In Proceedings of NIPS, pages 1410–1418, Vancouver, Canada. Miloˇs Radovanovi´c, Alexandros Nanopoulos, and Mirjana Ivanovi´c. 2010a. Hubs in space: Popular nearest neighbors in high-dimensional data. Journal of Machine Learning Research, 11:2487–2531. Miloˇs Radovanovi´c, Alexandros Nanopoulos, and Mirjana Ivanovi´c. 2010b. On the existence of obstinate results in vector space models. In Proceedings of SIGIR, pages 186–193, Geneva, Switzerland. Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In In Proceedings of ACL, pages 616–623. Noah A Smith and Jason Eisner. 2005. 
Contrastive estimation: Training log-linear models on unlabeled data. In Proceedings of ACL, pages 354–362. Richard Socher, Milind Ganjoo, Christopher Manning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Proceedings of NIPS, pages 935–943, Lake Tahoe, NV. Richard Socher, Quoc Le, Christopher Manning, and Andrew Ng. 2014. Grounded compositional semantics for finding and describing images with sentences. Transactions of the Association for Computational Linguistics, 2:207–218. Gilbert Strang. 2003. Introduction to linear algebra, 3d edition. Wellesley-Cambridge Press, Wellesley, MA. 279 J¨org Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of LREC, pages 2214–2218. Antonio Torralba and Alexei A Efros. 2011. Unbiased look at dataset bias. In In Proceedings of CVPR, pages 1521–1528. Peter Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188. Peter Turney, Yair Neuman, Dan Assaf, and Yohai Cohen. 2011. Literal and metaphorical sense identification through concrete and abstract context. In Proceedings of EMNLP, pages 680–690, Edinburgh, UK. Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of IJCAI, pages 2764– 2770. Matthew Zeiler and Rob Fergus. 2014. Visualizing and understanding convolutional networks. In Proceedings of ECCV (Part 1), pages 818–833, Zurich, Switzerland. Xiang Zhang and Yann LeCun. 2015. Text understanding from scrath. arXiv preprint arXiv:1502.01710. 280
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 281–291, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics A Generalisation of Lexical Functions for Composition in Distributional Semantics Antoine Bride IRIT & Universit´e de Toulouse [email protected] Tim Van de Cruys IRIT & CNRS, Toulouse [email protected] Nicholas Asher IRIT & CNRS, Toulouse [email protected] Abstract Over the last two decades, numerous algorithms have been developed that successfully capture something of the semantics of single words by looking at their distribution in text and comparing these distributions in a vector space model. However, it is not straightforward to construct meaning representations beyond the level of individual words – i.e. the combination of words into larger units – using distributional methods. Our contribution is twofold. First of all, we carry out a largescale evaluation, comparing different composition methods within the distributional framework for the cases of both adjectivenoun and noun-noun composition, making use of a newly developed dataset. Secondly, we propose a novel method for composition, which generalises the approach by Baroni and Zamparelli (2010). The performance of our novel method is also evaluated on our new dataset and proves competitive with the best methods. 1 Introduction In the course of the last two decades, there has been a growing interest in distributional methods for lexical semantics (Landauer and Dumais, 1997; Lin, 1998; Turney and Pantel, 2010). These methods are based on the distributional hypothesis (Harris, 1954), according to which words that appear in the same contexts tend to be similar in meaning. Inspired by Harris’ hypothesis, numerous researchers have developed algorithms that try to capture the semantics of individual words by looking at their distribution in a large corpus. Compared to manual studies common to formal semantics, distributional semantics offers substantially larger coverage since it is able to analyze massive amounts of empirical data. However, it is not trivial to combine the algebraic objects created by distributional semantics to get a sensible distributional representation for more complex expressions, consisting of several words. On the other hand, the formalism of the λ-calculus provides us with general, advanced and efficient methods for composition that can model meaning composition not only of simple phrases, but also more complex phenomena such as coercion or composition with fine-grained types (Asher, 2011; Luo, 2010; Bassac et al., 2010). Despite continued efforts to find a general method for composition and various approaches for the composition of specific syntactic structures (e.g. adjective-noun composition, or the composition of transitive verbs and direct objects (Mitchell and Lapata, 2008; Coecke et al., 2010; Baroni and Zamparelli, 2010)), the modeling of compositionality is still an important challenge for distributional semantics. Moreover, the validation of proposed methods for composition has used relatively small datasets of human similarity judgements (Mitchell and Lapata, 2008).1 Although such studies comparing similarity judgements have their merits, it would be interesting to have studies that evaluate methods for composition on a larger scale, using a larger test set of different specific compositions. 
Such an evaluation would allow us to evaluate more thoroughly the different methods of composition that have been proposed. This is one of the goals of this paper. To achieve this goal, we make use of two different resources. We have constructed a dataset for French containing a large number of pairs of a compositional expression (adjective-noun) and a single noun that is semantically close or identical to the composed expression. These pairs have been extracted semi-automatically from 1A notable exception is (Marelli et al., 2014), who propose a large-scale evaluation dataset for composition at the sentence level. 281 the French Wiktionary. We have also used the Semeval 2013 dataset of phrasal similarity judgements for English with similar pairs extracted semi-automatically from the English Wiktionary to construct a dataset for English for both adjective-noun and noun-noun composition. This affords us a cross-linguistic comparison of the methods. These data sets provide a substantial evaluation of the performance of different compositional methods. We have tested three different methods of composition proposed in the literature, viz. the additive and multiplicative model (Mitchell and Lapata, 2008), as well as the lexical function approach (Baroni and Zamparelli, 2010). The two first methods are entirely general, and take as input automatically constructed vectors for adjectives and nouns. The method by Baroni and Zamparelli, on the other hand, requires the acquisition of a particular function for each adjective, represented by a matrix. The second goal of our paper is to generalise the functional approach in order to eliminate the need for an individual function for each adjective. To this goal, we automatically learn a generalised lexical function, based on Baroni and Zamparelli’s approach. This generalised function combines with an adjective vector and a noun vector in a generalised way. The performance of our novel generalised lexical function approach is evaluated on our test sets and proves competitive with the best, extant methods. Our paper is organized as follows. First, we discuss the different compositional models that we have evaluated in our study, briefly revisiting the different existing methods for composition, followed by a description of our generalisation of the lexical function approach. Next, we report on our evaluation method and its results. The results section is followed by a section that discusses work related to ours. Lastly, we draw conclusions and lay out some avenues for future work. 2 Composition methods 2.1 Simple Models of Composition In this section, we describe the composition models for the adjective-noun case. The extension of these models to the noun-noun case is straightforward; one just needs to replace the adjective by the subordinate noun. Admittedly, choosing which noun is subordinate in noun-noun composition may be an interesting problem but it is outside the scope of this paper. We tested three simple models of composition: a baseline method that discounts the contribution of the adjective completely, and the additive and multiplicative models of composition. 
The baseline method is defined as follows: Compbaseline(adj, noun) = noun The additive model adds the point-wise values of the adjective vector adj and noun vector noun using independent coefficients to provide a result for the composition: Compadditive(adj, noun) = α noun+β adj The multiplicative model consists in a pointwise multiplication of the vectors adj and noun: Compmultiplicative(adj, noun) = noun⊗adj with (noun⊗adj)i = nouni ×adji 2.2 The lexical function model Baroni and Zamparelli’s (2010) lexical function model (LF) is somewhat more complex. Adjective-noun composition is modeled as the functional application of an adjective meaning (represented as a matrix) to a noun meaning (represented as a vector). Thus, the combination of an adjective and noun is the product of the matrix ADJ and the vector noun as shown in Figure 1. Baroni and Zamparelli propose learning an adjective’s matrix from examples of the vectors for adj noun obtained directly from the corpus. These vectors adj noun are obtained in the same way as vectors representing a single word: when the adjective-noun combination occurs, we observe its context and construct the vector from those observations. As an illustration, consider the example in 2. The word name appears three times modified by an adjective in the following excerpt from Oscar Wilde’s The Importance of Being Earnest. This informs us about the cooccurrence frequencies of three vectors: one for divine name, another for nice name, and one for charming name. Once the adj noun vectors have been created for a given adjective, we are able to calculate the ADJ matrix using a least squares regression that minimizes the equation ADJ×adj noun −noun. More formally, the problem is the following: Find ADJ s.t. ∑noun(ADJ ×noun−adj noun)2 is minimal 282 × = ADJECTIVE noun CompositionLF(adjective, noun) Figure 1: Lexical Function Composition Jack: Personally, darling, to speak quite candidly, I don’t much care about the name of Ernest . . . I don’t think the name suits me at all. Gwendolen: It suits you perfectly. It is a divine [name]. It has a music of its own. It produces vibrations. Jack: Well, really, Gwendolen, I must say that I think there are lots of other much nicer [names]. I think Jack, for instance, is a charming [name]. Figure 2: Excerpt from Oscar Wilde’s The Importance of Being Earnest For our example, we would minimize, among others DIVINE×divine name−name to get the matrix for DIVINE. LF requires a large corpus, because we have to observe a sufficient number of examples of the adjective and noun combined, which are perforce less exemplified than the presence of the noun or adjective in isolation. In Figure 2, each of the occurrences of ‘name’ can contribute to the information in the vector name but none can contribute to the vector evanescent name. Baroni and Zamparelli (2010) offer an explanation of how to cope with the potential sparse data problem for learning matrices for adjectives. Moreover, recent evaluations of LF show that existent corpora have enough data for it to provide a semantics for the most frequent adjectives and obtain better results than other methods (Dinu et al., 2013b). Nevertheless, LF has limitations in treating relatively rare adjectives. For example, the adjective ‘evanescent’ appears 359 times in the UKWaC corpus (Baroni et al., 2009). This is enough to generate a vector for evanescent, but may not be sufficient to generate a sufficient number of vectors evanescent noun to build the matrix EVANESCENT. 
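As an aside, the composition functions above and the least-squares estimation of an ADJ matrix can be sketched compactly in numpy. This is our own illustration under assumed data layouts (noun vectors and observed adj_noun vectors stacked as matrix rows), not the toolkit the authors actually used, and the additive coefficients are placeholders (the paper tunes them on a development set).

import numpy as np

def comp_baseline(adj, noun):
    return noun

def comp_additive(adj, noun, alpha=0.5, beta=0.5):
    return alpha * noun + beta * adj

def comp_multiplicative(adj, noun):
    return noun * adj  # point-wise product

def estimate_lf_matrix(noun_vecs, phrase_vecs):
    """noun_vecs: (n, d) matrix of noun vectors; phrase_vecs: (n, d) matrix of
    the corresponding corpus-observed adj_noun vectors. Returns the (d, d)
    matrix ADJ minimizing sum_i ||ADJ @ noun_i - phrase_i||^2."""
    # Solve noun_vecs @ ADJ.T ~ phrase_vecs in the least-squares sense.
    adj_T, *_ = np.linalg.lstsq(noun_vecs, phrase_vecs, rcond=None)
    return adj_T.T

# LF composition with a learned matrix is then simply ADJ @ noun.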
More importantly, for noun-noun combinations, one may need to have a LF for a combination. To get the meaning of blood donation campaign in the LF approach, the matrix BLOOD DONATION must be combined to the vector campaign. Learning this matrix would require to build vectors blood donation noun for many nouns. Even if it were possible, the issue would arise again for blood donation campaign plan, then for blood donation campaign plan meeting and so forth. In addition, LF’s approach to adjectival meaning and composition has a theoretical drawback. Like Montague Grammar, it supposes that the effect of an adjective on a noun meaning is specific to the adjective (Kamp, 1975). However, recent studies suggest that the Montague approach overgeneralises from the worst case, and that the vast majority of adjectives in the world’s languages are subsective, suggesting that the modification of nominal meaning that results from their composition with a noun follows general principles (Partee, 2010; Asher, 2011) that are independent of the presence or absence of examples of association. 2.3 Generalised LF To solve these problems, we generalise LF and replace individual matrices for adjectival meanings by a single lexical function: a tensor for adjectival composition A .2 Our proposal is that adjectivenoun composition is carried out by multiplying the tensor A with the vector for the adjective adj, followed by a multiplication with the vector noun, c.f. Figure 3. The product of the tensor A and the vector adj yields a matrix dependent of the adjective that is multiplied with the vector noun. This matrix corresponds to the LF matrix ADJ. As indicated in Figure 4, we obtain A with the help of matrices obtained from the LF approach, and from vectors for single words easily obtained in distributional semantics; we perform a least square regression minimizing the norm of the matrices generated by the equations in Figure 4. Formally, the problem is 2A tensor generalises a matrix to several dimensions. We use a tensor in three modes. For an introduction to tensors, see (Kolda and Bader, 2009). 283 = A djective × a dj ! × noun ∀adjective, noun CompositionGLF(adjective, noun) Figure 3: Composition in the generalised lexical function model Find A s.t. ∑adj(A ×adj−ADJ)2 is minimal Note that our tensor is not just the compilation of the information found in the LF matrices: the adjective mode of our tensor has a limited number of dimensions, whereas the LF approach creates a separate matrix for each individual adjective. This reduction forces the model to generalise, and we hypothesise that this generalisation allows us to make proper noun modifications even in the light of sparse data. Our approach requires learning a significant number of matrices ADJ. This is not a problem, since FRWaC and UKWaC provide sufficient data for the LF approach to generate matrices for a significant number of adjectives. For example, the 2000th most frequent adjective in FRWaC (‘fasciste’) has more than 4000 occurrences. To return to our example of blood donation campaign, once the tensor N for noun-noun composition is learned, our approach requires only the knowledge of the vectors blood, donation and campaign. We would then perform the following computations: blood donation = (N ×blood)×donation blood donation campaign = (N ×blood donation)×campaign and this allows us to avoid the sparse data problem for the LF approach in generating the matrix BLOOD DONATION. 
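The generalised model can likewise be sketched with a three-mode tensor and two contractions. The shapes, the use of einsum, and the per-cell least-squares solver below are our assumptions (the paper does not commit to a particular solver); only the objective, minimizing the squared difference between A x adj and the learned ADJ matrices, and the composition order (A x adj) x noun come from the text.

import numpy as np

def glf_compose(A, adj, noun):
    """A: tensor of shape (d, d, d_adj); adj: (d_adj,); noun: (d,).
    Contracting A with adj yields an adjective-specific (d, d) matrix,
    which is then applied to the noun vector."""
    adj_matrix = np.einsum('ijk,k->ij', A, adj)
    return adj_matrix @ noun

def estimate_glf_tensor(adj_vecs, adj_matrices):
    """adj_vecs: (n, d_adj) adjective vectors; adj_matrices: (n, d, d) LF
    matrices for the same adjectives. Find A minimizing
    sum_a ||A x adj_a - ADJ_a||^2 (a least-squares problem per tensor cell)."""
    n, d_adj = adj_vecs.shape
    _, d1, d2 = adj_matrices.shape
    # Flatten each ADJ matrix; solve adj_vecs @ X ~ flattened matrices.
    targets = adj_matrices.reshape(n, d1 * d2)
    X, *_ = np.linalg.lstsq(adj_vecs, targets, rcond=None)  # (d_adj, d1*d2)
    return X.T.reshape(d1, d2, d_adj)

In the setup used here, adjectives and nouns live in the same reduced 300-dimensional space, so d and d_adj coincide.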
Once we have obtained the tensor A , we verify experimentally its relevance to composition, in order to check whether a tensor optimising the equations in Figure 4 would be semantically interesting. 3 Evaluation 3.1 Tasks description In order to evaluate the different composition methods, we constructed test sets for French and English, inspired by the work of Zanzotto et al. (2010) and the SEMEVAL-2013 task evaluating phrasal semantics (Korkontzelos et al., 2013). The task is to make a judgement about the semantic similarity of a short word sequence (an adjectivenoun combination) and a single noun. This is important, as composition models need to be able to treat word sequences of arbitrary length. Formally, the task is presented as: With comp = composition(adj, noun1) Evaluate similarity(comp, noun2) where the ‘composition’ function is carried out by the different composition models. ‘Similarity’ needs to be a binary function, with return values ‘similar’ and ‘non-similar’. Note, however, that the distributional approach yields a continuous similarity value (such as the cosine similarity between two vectors). In order to determine which cosine values correspond to ‘similar’ and which cosine values correspond to ‘non-similar’, we looked at a number of examples from a development set. More precisely, we carried out a logistic regression on 50 positive and 50 negative examples (separate from our test set) in order to automatically learn the threshold at which a pair is considered to be similar. Finally, we decided to use balanced test sets containing as many positive instances as negative ones. The test set is constructed in a semi-automatic way, making use of the canonical phrasing of dictionary definitions. Take for example the definition of bassoon in the English Wiktionary3, presented in Figure 5. It is quite straightforward to extract the pair (musical instrument,bassoon) from this definition. Using a large dictionary (such as Wiktionary), it is then possible to extract a large number of positive – i.e. similar – (adjective noun,noun) pairs. For the construction of our test set for French, we downloaded all entries of the French Wiktionary (Wiktionnaire) and annotated them with 3http://en.wiktionary.org/wiki/bassoon, accessed on 26 February 2015. 284 Find tensor A by minimizing: A djective × re d − RED , A djective × s low − SLOW ... Figure 4: Learning the A djective tensor bassoon /b@"su:n/ (plural bassoons) 1. A musical instrument in the woodwind family, having a double reed and, playing in the tenor and bass ranges. Figure 5: Definition of bassoon, extracted from the English Wiktionary part of speech tags, using the French part of speech tagger MElt (Denis et al., 2010). Next, we extracted all definitions that start with an adjectivenoun combination. As a final step, we filtered all instances containing words that appear too infrequently in our FRWaC corpus.4 The automatically extracted instances were then checked manually, and all instances that were considered incorrect were rejected. This gave us a final test set of 714 positive examples. We also created an initial set of negative examples, where we combined an existing combination of adjective noun1 (extracted from the French Wiktionary), with a randomly selected noun noun2. Again, we verified manually that the resulting (adjective noun1, noun2) pairs constituted actual negative examples. We then created a second set of negative examples by randomly selecting two nouns (noun1,noun2) and one adjective adjective. 
The resulting pairs (adjective noun1, noun2) were verified manually. In addition to our new test set for French, we also experimented with the original test set of the SEMEVAL-2013 task evaluation phrasal semantics for English. However, the original test set lacked human oversight as ‘manly behavior’ was considered similar to ‘testosterone’ for example. We thus hand-checked the test set ourselves and extracted 652 positive pairs. The negative pairs from the original SEMEVAL2013 are a combination of a random noun and a 4i.e. less than 200 times for adjectives and less than 1500 times for nouns random adjective-noun compositon found in the English Wiktionary. We used it as our first set of English negative examples as it is similar in construction to our first set of negative examples in French. In addition, we created a completely random negative test set for English in the same fashion we did for the second negative test set for French. Finally, the original test set also contains nounnoun compounds so we also created a test set for that. This gave us 226 positive and negative pairs for the noun-noun composition. 3.2 Semantic space construction In this section, we describe the construction of our semantic space. Our semantic space for French was built using the FRWaC corpus (Baroni et al., 2009) – about 1,6 billion words of web texts – which has been tagged with MElt tagger (Denis et al., 2010) and parsed with MaltParser (Nivre et al., 2006a), trained on a dependency-based version of the French treebank (Candito et al., 2010). Our semantic space for English has been built using the UKWaC corpus (Baroni et al., 2009), which consists of about 2 billion words extracted from the web. The corpus has been part of speech tagged and lemmatized with Stanford Part-OfSpeech Tagger (Toutanova and Manning, 2000; Toutanova et al., 2003), and parsed with MaltParser (Nivre et al., 2006b) trained on sections 2-21 of the Wall Street Journal section of the Penn Treebank extended with about 4000 ques285 positive examples random negative examples Wiktionary-based negative examples (mot court, abr´eviation) (importance fortuit, gamme) (jugement favorable, discorde) ‘short word’, ‘abbreviation’ ‘accidental importance’, ‘range’ ‘favorable judgement’, ‘discord’ (ouvrage litt´eraire, essai) (penchant autoritaire, ile) (circonscription administratif, fumier) ‘literary work’, ‘essay’ ‘authoritarian slope’, isle’ ‘administrative district’, ‘manure’ (compagnie honorifique, ordre) (auspice aviaire, ponton) (mention honorable, renne) ‘honorary company’, ‘order’ ‘avian omen’, ‘pontoon’ ‘honorable mention’, ‘reindeer’ Table 1: A number of examples from our test set for French tions from the QuestionBank5. For both corpora, we extracted the lemmas of all nouns, adjectives and (bag of words) context words. We only kept those lemmas that consist of alphabetic characters.6 We then selected the 10K most frequent lemmas for each category (nouns, adjectives, context words), making sure to include all the words from the test set. As a final step, we created our semantic space vectors using adjectives and nouns as instances, and bag of words context words as features. The resulting vectors were weighted using positive point-wise mutual information (ppmi, (Church and Hanks, 1990)), and all vectors were normalized to unit length. 
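For completeness, the weighting and normalization step of Section 3.2 can be sketched as follows. The count-matrix layout and the toy input at the end are invented for illustration; the PPMI weighting and unit-length normalization themselves are as described above.

import numpy as np

def ppmi(counts):
    """counts: (n_words, n_contexts) raw co-occurrence counts."""
    total = counts.sum()
    p_wc = counts / total
    p_w = p_wc.sum(axis=1, keepdims=True)
    p_c = p_wc.sum(axis=0, keepdims=True)
    with np.errstate(divide='ignore', invalid='ignore'):
        pmi = np.log(p_wc / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = 0.0
    return np.maximum(pmi, 0.0)   # keep only positive associations

def unit_normalize(rows):
    norms = np.linalg.norm(rows, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    return rows / norms

# toy example with random counts, purely to show the pipeline
space = unit_normalize(ppmi(np.random.poisson(1.0, size=(1000, 2000)).astype(float)))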
We then compared the different composition methods on different versions of the same semantic space (both for French and English): the full semantic space, a reduced version of the space to 300 dimensions using singular value decomposition (svd, (Golub and Van Loan, 1996)), and a reduced version of the space to 300 dimensions using non-negative matrix factorization (nmf, (Lee and Seung, 2000)). We did so in order to test each method in its optimal conditions. In fact: • A non-reduced space contains more information. This might be beneficial for methods that are able to take advantage of the full semantic space (viz. the additive et multiplicative model). On the other hand, to be able to use the non-reduced space for the lexical function approach, one would have to learn matrices of size 10K ×10K for each adjective. This would be problematic in terms of computing time and data sparseness, as we previously noted. The same goes for our gen5http://maltparser.org/mco/english_parser/ engmalt.html 6This step generally filters out dates, numbers and punctuation, which have little interest for the distributional approach. eralised approach. • Previous research has indicated that the lexical function approach is able to achieve better results using a reduced space with svd. On the other hand, the negative values that result from svd are detrimental for the multiplicative approach. • An nmf-reduced semantic space is not detrimental for the multiplicative approach. In order to determine the best parameters for the additive model, we tested this model for different values of α and β where α +β = 17 on a development set and kept the values with the best results: α = 0.4, β = 0.6. 3.3 Data used for regression The LF approach and its generalisation need data in order to perform the least square regression. We thus created a semantic space for adjective noun and noun noun vectors using the most frequent ones in a similar way to how we created them in 3.2. Then we solved the equations in 2.2 and forth. Even though the regression data were disjoint from the test sets, for each pair, we removed some of the data that may cause overfitting. For the lexical function tests, we remove the adjective noun vector corresponding to the test pair from the regression data. For example, we do not use short word to learn SHORT for the (short word, abbrevation) pair. For the generalised lexical function tests, we use the full regression data to learn the lexical functions used to train the tensor. However, we remove the ADJECTIVE matrix corresponding to the test pair from the (tensor) regression data. For example, we do not use SHORT to learn A for the (short word, abbreviation) pair. 7Since the vectors are normalized (cf. 3.2), this condition does not affect the generality of our test. 286 Table 2: Percentage of correctly classified pairs for (adjective noun1,noun2) for both French and English spaces. baseline multiplicative additive LF generalised LF fr en fr en fr en fr en fr en non-reduced 0.83 0.81 0.86 0.86 0.88 0.86 N/A N/A svd 0.79 0.79 0.55 0.59 0.84 0.78 0.93 0.92 0.91 0.88 nmf 0.78 0.78 0.83 0.77 0.79 0.84 0.90 0.86 0.88 0.85 (a) Negative examples are created randomly. baseline multiplicative additive LF generalised LF fr en fr en fr en fr en fr en non-reduced 0.80 0.79 0.83 0.81 0.85 0.80 N/A N/A svd 0.78 0.77 0.54 0.48 0.83 0.78 0.84 0.79 0.81 0.77 nmf 0.78 0.78 0.79 0.78 0.83 0.82 0.82 0.82 0.81 0.80 (b) Negative examples are created from existing pairs. 
Table 3: Percentage of correctly classified pairs for (noun2 noun1,noun3) with negative examples from existing pairs. Only the English space is tested. English space baseline multiplicative additive LF generalised LF non-reduced 0.77 0.80 0.84 N/A N/A svd 0.78 0.49 0.86 0.83 0.82 nmf 0.79 0.82 0.86 0.85 0.83 3.4 Results In this section, we present how the various models perform on our test sets. 3.4.1 General results Tables 2 & 3 give an overview of the results. Note first that the baseline approach, which compares only the two nouns and ignores the subordinate adjective or noun, does relatively well on the task (∼80% accuracy). This reflects the fact that the head noun in our pairs extracted from definitions is close to (and usually a super type of) the noun to be defined. In addition, we observe that the multiplicative method performs badly, as expected, on the semantic space reduced with svd. This confirms the incompatibility of this method with the negative values generated by svd. Indeed, multiplying two vectors with negative values term by term may yield a third vector very far away from the other two. Such a combination does not support the subsectivity of most our test pairs. Apart from that, svd and nmf reductions do not affect the methods much. Moreover, we observe that the multiplicative model performs better than the baseline but is bested by the additive model. We also see that additive and lexical functions often yield similar performance. Finally, the generalised lexical function is slightly less accurate than the lexical functions. This is an expected consequence of generalisation. Nevertheless, the generalised lexical function yields sound results confirming our intuition that we can represent adjective-noun (or noun-noun) combinations by one function. 3.4.2 Adjective-noun With random negative pairs (Table 2a), we observe that the lexical function model obtains the best results for the svd space. This result is significantly better than any other method on any of the spaces—e.g.,for French space, χ2 = 33.49, p < 0.01 when compared to the additive model for the non-reduced space which performs second. However, with non-random negative pairs (Table 2b), LF and the additive model obtain scores that are globally equivalent for their best respec287 tive conditions — in French 0.85 for the additive non-reduced model vs. 0.84 for the LF svd model, a difference that is not significant (χ2 = 0.20, p < 0.05). This seems to indicate that LF is especially efficient at separating out nonsense combinations. This may be caused by the fact that lexical functions learn from actual pairs. Thus, when an adjective noun combination is bizarre, the ADJECTIVE matrix has not been optimized to interact with the noun vector and may lead to complete non-sense — Which is a good thing because humans would analyze the combination as such. Finally, similar results in French and English confirm the intuition that distributional methods (and its composition models) are independent of the idiosyncrasies of a particular language; in particular they are as efficient for French as for English. 3.4.3 Noun-noun The noun-noun tests (Table 3) yields similar results to the adjective-noun tests. This is not so surprising since noun noun compounds in English also obey a roughly subsective property: a baseball field is still a field (though a cricket pitch is perhaps not so obviously a pitch). 
We can see that the accuracy increase from the baseline is higher compared to adjective-noun test on the same exact spaces (Table 2b, right values). This may be due to the fact that the subordinate noun in noun-noun combinations is more important than the adjective subordinate in adjective-noun combination. 4 Related work Many researchers have already studied and evaluated different composition models within a distributional approach. One of the first studies evaluating compositional phenomena in a systematic way is Mitchell and Lapata’s (2008) approach. They explore a number of different models for vector composition, of which vector addition (the sum of each feature) and vector multiplication (the element-wise multiplication of each feature) are the most important. They evaluate their models on a noun-verb phrase similarity task. Human annotators were asked to judge the similarity of two composed pairs (by attributing a certain score). The model’s task is then to reproduce the human judgements. Their results show that the multiplicative model yields the best results, along with a weighted combination of the additive and multiplicative model. The authors redid their study using a larger test set in Mitchell and Lapata (2010) (adjective-noun composition was also included), and they confirmed their initial results. Baroni and Zamparelli (2010) evaluate their lexical function model within a somewhat different context. They evaluated their model by looking at its capacity of reconstructing the adjective noun vectors that have not been seen during training. Their results show that their lexical function model obtains the best results for the reconstruction of the original co-occurrence vectors, followed by the additive model. We observe the same tendency in our evaluation results for French, although our results for English show a different picture. We would like to explore this discordance further in future work. Grefenstette et al. (2013) equally propose a generalisation of the lexical function model that uses tensors. Their goal is to model transitive verbs, and the way we acquire our tensor is similar to theirs. In fact, they use the LF approach in order to learn VERB OBJECT matrices that may be multiplied by a subject vector to obtain the subject verb object vector. In a second step, they learn a tensor for each individual verb, which is similar to how we learn our adjective tensor A . Coecke et al. (2010) present an abstract theoretical framework in which a sentence vector is a function of the Kronecker product of its word vectors, which allows for greater interaction between the different word features. A number of instantiations of the framework – where the key idea is that relational words (e.g. adjectives or verbs) have a rich (multi-dimensional) structure that acts as a filter on their arguments – are tested experimentally in Grefenstette and Sadrzadeh (2011a) and Grefenstette and Sadrzadeh (2011b). The authors evaluated their models using a similarity task that is similar to the one used by Mitchell & Lapata. However, they use more complex compositional expressions: rather than using compositions of two words (such as a verb and an object), they use simple transitive phrases (subject-verbobject). They show that their instantiations of the categorical model reach better results than the additive and multiplicative models on their transitive similarity task. Socher et al. (2012) present a compositional model based on a recursive neural network. 
Each 288 node in a syntactic tree is assigned both a vector and a matrix; the vector captures the actual meaning of the constituent, while the matrix models the way it changes the meaning of neighbouring words and phrases. They use an extrinsic evaluation, using the model for a sentiment prediction task. They show that their model gets better results than the additive, multiplicative, and lexical function approach. Other researchers, however, have published different results. Blacoe and Lapata (2012) evaluated the additive and multiplicative model, as well as Socher et al.’s (2012) approach on two different tasks: Mitchell & Lapata’s (2010) similarity task and a paraphrase detection task. They find that the additive and multiplicative models reach better scores than Socher et al.’s model. Tensors have been used before to model different aspects of natural language. Giesbrecht (2010) describes a tensor factorization model for the construction of a distributional model that is sensitive to word order. And Van de Cruys (2010) uses a tensor factorization model in order to construct a three-way selectional preference model of verbs, subjects, and objects. 5 Conclusion We have developed a new method of composition and tested it in comparison with different composition methods assuming a distributional approach. We developed a test set for French pairing nouns with adjective noun combinations very similar in meaning from the French Wiktionary. We also used an existing SEMEVAL-2013 set to create a similar test set for English both for adjective noun combination and noun noun combination. Our tests confirm that the lexical function approach by Baroni and Zamparelli performs well compared to other methods of composition, but only when the negative examples are constructed randomly. Our generalised lexical function approach fares almost equally well. It also has the advantage of being constructed from automatically acquired adjectival and noun vectors, and offers the additional advantage of countering data sparseness. However, the lexical function approach claims to perform well on more subtle cases — e.g. non-subsective combinations such as stone lion. Our test sets does not contain such cases, and so we cannot draw any conclusion on this claim. In future work, we would like to test different sizes of dimensionality reduction, in order to optimize our generalised lexical function model. Moreover, it is possible that better results may be obtained by proposing multiple generalised lexical functions, rather than a single one. We could, e.g., try to separate the intersective adjectives from non-intersective adjectives. And finally, we would like to further explore the performance of the lexical function model and generalised lexical function model on different datasets, which involve more complex compositional phenomena. 6 Acknowledgments We thank Dinu et al. (2013a) for their work on the DisSeCT toolkit8, which provides plenty of helpful functions for composition in distributional semantics. We also thank the OSIRIM platform9 for allowing us to do the computations we needed. Finally, we thank the reviewers of this paper for their insightful comments. This work is supported by a grant overseen by the French National Research Agency ANR (ANR-14-CE24-0014). References Nicholas Asher. 2011. Lexical Meaning in Context: A Web of Words. Cambridge University Press. Marco Baroni and Roberto Zamparelli. 2010. 
Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 1183–1193, Cambridge, MA, October. Association for Computational Linguistics. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The wacky wide web: A collection of very large linguistically processed web-crawled corpora. Language Resources and Evaluation, 43(3):209–226. Christian Bassac, Bruno Mery, and Christian Retor´e. 2010. Towards a Type-theoretical account of lexical semantics. Journal of Logic, Language and Information, 19(2):229–245. William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 546–556, Jeju Island, Korea, July. Association for Computational Linguistics. 8http://clic.cimec.unitn.it/composes/toolkit/ 9http://osirim.irit.fr/site/en 289 Marie Candito, Benoˆıt Crabb´e, Pascal Denis, et al. 2010. Statistical french dependency parsing: treebank conversion and first results. In Proceedings of the Seventh International Conference on Language Resources and Evaluation (LREC 2010), pages 1840–1847. Kenneth W. Church and Patrick Hanks. 1990. Word association norms, mutual information & lexicography. Computational Linguistics, 16(1):22–29. Bob Coecke, Mehrnoosh Sadrzadeh, and Stephen Clark. 2010. Mathematical foundations for a compositional distributed model of meaning. Lambek Festschrift, Linguistic Analysis, vol. 36, 36. Pascal Denis, Benoˆıt Sagot, et al. 2010. Exploitation d’une ressource lexicale pour la construction d’un ´etiqueteur morphosyntaxique ´etat-de-l’art du franc¸ais. In Traitement Automatique des Langues Naturelles: TALN 2010. Georgiana Dinu, Nghia The Pham, and Marco Baroni. 2013a. Dissect - distributional semantics composition toolkit. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 31–36, Sofia, Bulgaria, August. Association for Computational Linguistics. Georgiana Dinu, Nghia The Pham, and Marco Baroni. 2013b. General estimation and evaluation of compositional distributional semantic models. In Proceedings of the Workshop on Continuous Vector Space Models and their Compositionality, pages 50–58, Sofia, Bulgaria, August. Association for Computational Linguistics. Eugenie Giesbrecht. 2010. Towards a matrix-based distributional model of meaning. In Proceedings of the NAACL HLT 2010 Student Research Workshop, pages 23–28. Association for Computational Linguistics. Gene H. Golub and Charles F. Van Loan. 1996. Matrix Computations (3rd Ed.). Johns Hopkins University Press, Baltimore, MD, USA. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011a. Experimental support for a categorical compositional distributional model of meaning. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 1394– 1404, Edinburgh, Scotland, UK., July. Association for Computational Linguistics. Edward Grefenstette and Mehrnoosh Sadrzadeh. 2011b. Experimenting with transitive verbs in a discocat. In Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 62–66, Edinburgh, UK, July. Association for Computational Linguistics. E. Grefenstette, G. Dinu, Y.-Z. Zhang, M. Sadrzadeh, and Baroni M. 2013. 
Multi-step regression learning for compositional distributional semantics. In Proceedings of the 10th International Conference on Computational Semantics (IWCS), pages 131–142, East Stroudsburg PA. Association for Computational Linguistics. Zellig S. Harris. 1954. Distributional structure. Word, 10(23):146–162. Hans Kamp. 1975. Two theories about adjectives. Formal semantics of natural language, pages 123– 155. Tamara G. Kolda and Brett W. Bader. 2009. Tensor decompositions and applications. SIAM Review, 51(3):455–500, September. Ioannis Korkontzelos, Torsten Zesch, Fabio Massimo Zanzotto, and Chris Biemann. 2013. Semeval-2013 task 5: Evaluating phrasal semantics. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 39–47, Atlanta, Georgia, USA, June. Association for Computational Linguistics. Thomas Landauer and Susan Dumais. 1997. A solution to Plato’s problem: The Latent Semantic Analysis theory of the acquisition, induction, and representation of knowledge. Psychology Review, 104:211–240. Daniel D. Lee and H. Sebastian Seung. 2000. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems 13, pages 556–562. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (COLING-ACL98), Volume 2, pages 768–774, Montreal, Quebec, Canada. Zhaohui Luo. 2010. Type-theoretical semantics with coercive subtyping. SALT20, Vancouver. Marco Marelli, Luisa Bentivogli, Marco Baroni, Raffaella Bernardi, S Menini, and Roberto Zamparelli. 2014. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. In Proceedings of SemEval 2014: International Workshop on Semantic Evaluation. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. proceedings of ACL-08: HLT, pages 236–244. J. Mitchell and M. Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1429. J. Nivre, J. Hall, and J. Nilsson. 2006a. Maltparser: A data-driven parser-generator for dependency parsing. In Proceedings of LREC-2006, pages 2216– 2219, Genoa, Italy. 290 Joakim Nivre, Johan Hall, and Jens Nilsson. 2006b. Maltparser: A data-driven parser-generator for dependency parsing. In Proceedings of LREC-2006, pages 2216–2219. Barbara H Partee. 2010. Privative adjectives: subsective plus coercion. B ¨AUERLE, R. et ZIMMERMANN, TE, ´editeurs: Presuppositions and Discourse: Essays Offered to Hans Kamp, pages 273– 285. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211, Jeju Island, Korea, July. Association for Computational Linguistics. Kristina Toutanova and Christopher D. Manning. 2000. Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-2000), pages 63–70. Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. 
Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of HLT-NAACL 2003, pages 252– 259. Peter Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of artificial intelligence research, 37(1):141–188. Tim Van de Cruys. 2010. A non-negative tensor factorization model for selectional preference induction. Natural Language Engineering, 16(4):417–437. Fabio Massimo Zanzotto, Ioannis Korkontzelos, Francesca Fallucchi, and Suresh Manandhar. 2010. Estimating linear models for compositional distributional semantics. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 1263–1271, Beijing, China, August. Coling 2010 Organizing Committee. 291
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 292–301, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Simple Learning and Compositional Application of Perceptually Grounded Word Meanings for Incremental Reference Resolution Casey Kennington CITEC, Bielefeld University Universit¨atsstraße 25 33615 Bielefeld, Germany ckennington@cit-ec. uni-bielefeld.de David Schlangen CITEC, Bielefeld University Universit¨atsstraße 25 33615 Bielefeld, Germany david.schlangen@ uni-bielefeld.de Abstract An elementary way of using language is to refer to objects. Often, these objects are physically present in the shared environment and reference is done via mention of perceivable properties of the objects. This is a type of language use that is modelled well neither by logical semantics nor by distributional semantics, the former focusing on inferential relations between expressed propositions, the latter on similarity relations between words or phrases. We present an account of word and phrase meaning that is perceptually grounded, trainable, compositional, and ‘dialogueplausible’ in that it computes meanings word-by-word. We show that the approach performs well (with an accuracy of 65% on a 1-out-of-32 reference resolution task) on direct descriptions and target/landmark descriptions, even when trained with less than 800 training examples and automatically transcribed utterances. 1 Introduction The most basic, fundamental site of language use is co-located dialogue (Fillmore, 1975; Clark, 1996) and referring to objects, as in Example (1), is a common occurrence in such a co-located setting. (1) The green book on the left next to the mug. Logical semantics (Montague, 1973; Gamut, 1991; Partee et al., 1993) has little to say about this process – its focus is on the construction of syntactically manipulable objects that model inferential relations; here, e.g. the inference that there are (at least) two objects. Vector space approaches to distributional semantics (Turney and Pantel, 2010) similarly focuses on something else, namely semantic similarity relations between words or phrases (e.g. finding closeness for “coloured tome on the right of the cup”). Neither approach by itself says anything about processing; typically, the assumption in applications is that fully presented phrases are being processed. Lacking in these approaches is a notion of grounding of symbols in features of the world (Harnad, 1990).1 In this paper, we present an account of word and phrase meaning that is (a) perceptually grounded in that it provides a link between words and (computer) vision features of real images, (b) trainable, as that link is learned from examples of language use, (c) compositional in that the meaning of phrases is a function of that of its parts and composition is driven by structural analysis, and (d) ‘dialogue-plausible’ in that it computes meanings incrementally, word-by-word and can work with noisy input from an automatic speech recogniser (ASR). We show that the approach performs well (with an accuracy of 65% on a reference resolution task out of 32 objects) on direct descriptions as well as target/landmark descriptions, even when trained with little data (less than 800 training examples). In the following section we will give a background on reference resolution, followed by a description of our model. 
We will then describe the data we used and explain our evaluations. We finish by giving results, providing some additional analysis, and discussion. 2 Background: Reference Resolution Reference resolution (RR) is the task of resolving referring expressions (REs; as in Example (1)) to a referent, the entity to which they are intended to refer. Following Kennington et al. (2015a), this can be formalised as a function frr that, given a representation U of the RE and a representation W 1But see discussion below of recent extensions of these approaches taking this into account. 292 of the (relevant aspects of the) world, returns I∗, the identifier of one the objects in the world that is the referent of the RE. A number of recent papers have used stochastic models for frr where, given W and U, a distribution over a specified set of candidate entities in W is obtained and the probability assigned to each entity represents the strength of belief that it is the referent. The referent is then the argmax: I∗= argmax I P(I|U, W) (1) Recently, generative approaches, including our own, have been presented (Funakoshi et al., 2012; Kennington et al., 2013; Kennington et al., 2014; Kennington et al., 2015b; Engonopoulos et al., 2013) which model U as words or ngrams and the world W as a set of objects in a virtual game board, represented as a set properties or concepts (in some cases, extra-linguistic or discourse aspects were also modelled in W, such as deixis). In Matuszek et al. (2014), W was represented as a distribution over properties of tangible objects and U was a Combinatory Categorical Grammar parse. In all of these approaches, the objects are distinct and represented via symbolically specified properties, such as colour and shape. The set of properties is either read directly from the world if it is virtual, or computed (i.e., discretised) from the real world objects. In this paper, we learn a mapping from W to U directly, without mediating symbolic properties; such a mapping is a kind of perceptual grounding of meaning between W and U. Situated RR is a convenient setting for learning perceptuallygrounded meaning, as objects that are referred to are physically present, are described by the RE, and have visual features that can be computationally extracted and represented. Further comparison to related work will be discussed in Section 5. 3 Modelling Reference to Visible Objects Overview As a representative of the kind of model explained above with formula (1), we want our model to compute a probability distribution over candidate objects, given a RE (or rather, possibly just a prefix of it). We break this task down into components: The basis of our model is a model of word meaning as a function from perceptual features of a given object to a judgement about how well a word and that object “fit together”. (See Section 5 for discussion of prior uses of this “words as classifiers”-approach.) This can (loosely) be seen as corresponding to the intension of a word, which for example in Montague’s approach is similarly modelled as a function, but from possible worlds to extensions (Gamut, 1991). We model two different types of words / word meanings: those picking out properties of single objects (e.g., “green” in “the green book”), following Kennington et al. (2015a), and those picking out relations of two objects (e.g., “next to” in (1)), going beyond Kennington et al. (2015a). These word meanings are learned from instances of language use. 
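This "words-as-classifiers" view, spelled out formally in the following paragraphs, can be made concrete with a short sketch. It is a minimal illustration, not the authors' implementation: it assumes scikit-learn-style logistic regression, a hypothetical feature layout for objects, and the positive/negative sampling scheme described below.

```python
# Minimal sketch (not the authors' code): one binary logistic-regression
# classifier per word, trained on visual feature vectors of candidate objects.
# Feature layout and toy data are illustrative assumptions.
import random
import numpy as np
from sklearn.linear_model import LogisticRegression

def train_word_classifiers(episodes, neg_per_pos=1, seed=0):
    """episodes: list of (words, scene, referent_id); scene maps object id -> feature vector."""
    rng = random.Random(seed)
    examples = {}
    for words, scene, ref_id in episodes:
        distractors = [i for i in scene if i != ref_id]
        for w in words:
            xs, ys = examples.setdefault(w, ([], []))
            xs.append(scene[ref_id]); ys.append(1)            # positive: the referent
            for i in rng.sample(distractors, min(neg_per_pos, len(distractors))):
                xs.append(scene[i]); ys.append(0)             # negative: another object in the scene
    classifiers = {}
    for w, (xs, ys) in examples.items():
        if len(set(ys)) == 2:                                 # need both classes to fit
            classifiers[w] = LogisticRegression().fit(np.array(xs), np.array(ys))
    return classifiers

# Toy usage with made-up 3-dimensional visual features for two objects.
scene = {0: [0.9, 0.1, 0.2], 1: [0.1, 0.8, 0.3]}
episodes = [(["red", "piece"], scene, 0), (["green", "piece"], scene, 1)] * 5
classifiers = train_word_classifiers(episodes)
print(classifiers["red"].predict_proba([scene[0]])[0, 1])     # p_w(x): how well "red" fits object 0
```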
The second component then is the application of these word meanings in the context of an actual reference and within a phrase. This application gives the desired result of a probability distribution over candidate objects, where the probability expresses the strength of belief in the object falling in the extension of the expression. Here we model two different types of composition, of what we call simple references and relational references. These applications are strictly compositional in the sense that the meanings of the more complex constructions are a function of those of their parts. Word Meanings The first type of word (or rather, word meaning) we model picks out a single object via its visual properties. (At least, this is what we use here; any type of feature could be used.) To model this, we train for each word w from our corpus of REs a binary logistic regression classifier that takes a representation of a candidate object via visual features (x) and returns a probability pw for it being a good fit to the word (where w is the weight vector that is learned and σ is the logistic function): pw(x) = σ(w⊺x + b) (2) Formalising the correspondence mentioned above, the intension of a word can in this approach then be seen as the classifier itself, a function from a representation of an object to a probability: [[w]]obj = λx.pw(x) (3) (Where [[w]] denotes the meaning of w, and x is of the type of feature given by fobj, the function computing a feature representation for a given object.) 293 We train these classifiers using a corpus of REs (further described in Section 4), coupled with representations of the scenes in which they were used and an annotation of the referent of that scene. The setting was restricted to reference to single objects. To get positive training examples, we pair each word of a RE with the features of the referent. To get negative training examples, we pair the word with features of (randomly picked) other objects present in the same scene, but not referred to by it. This selection of negative examples makes the assumption that the words from the RE apply only to the referent. This is wrong as a strict rule, as other objects could have similar visual features as the referent; for this to work, however, this has to be the case only more often than it is not. The second type of word that we model expresses a relation between objects. Its meaning is trained in a similar fashion, except that it is presented a vector of features of a pair of objects, such as their euclidean distance, vertical and horizontal differences, and binary features denoting higher than/lower than and left/right relationships. Application and Composition The model just described gives us a prediction for a pair of word and object (or pair of objects). What we wanted, however, is a distribution over all candidate objects in a given utterance situation, and not only for individual words, but for (incrementally growing) REs. Again as mentioned above, we model two types of application and composition. First, what we call ‘simple references’—which roughly corresponds to simple NPs—that refer only by mentioning properties of the referent (e.g. “the red cross on the left”). 
To get a distribution for a single word, we apply the word classifier (the intension) to all candidate objects and normalise; this can then be seen as the extension of the word in a given (here, visual) discourse universe W, which provides the candidate objects (xi is the feature vector for object i, normalize() vectorized normalisation, and I a random variable ranging over the candidates): [[w]]W obj = normalize(([[w]]obj(x1), . . . , [[w]]obj(xk))) = normalize((pw(x1), . . . , pw(xk))) = P(I|w) (4) In effect, this combines the individual classifiers into something like a multi-class logistic regression / maximum entropy model—but, nota bene, only for application. The training regime did not need to make any assumptions about the number of objects present, as it trained classifiers for a 2class problem (how well does this given object fit to the word?). The multi-class nature is also indicated in Figure 1, which shows multiple applications of the logistic regression network for a word, and a normalisation layer on top. σ(w|x1 + b) σ(w|x2 + b) σ(w|x3 + b) x1 x2 x3 Figure 1: Representation as network with normalisation layer. To compose the evidence from individual words w1, . . . , wk into a prediction for a ‘simple’ RE [srw1, . . . , wk] (where the bracketing indicates the structural assumption that the words belong to one, possibly incomplete, ‘simple reference’), we average the contributions of its constituent words. The averaging function avg() over distributions then is the contribution of the construction ‘simple reference (phrase)’, sr, and the meaning of the whole phrase is the application of the meaning of the construction to the meaning of the words: [[[srw1, . . . , wk]]]W = [[sr]]W [[w1, . . . , wk]]W = avg([[w1]]W , . . . , [[wk]]W ) (5) where avg() is defined as avg([[w1]]W , [[w2]]W ) = Pavg(I|w1, w2) with Pavg(I = i|w1, w2) = 1 2(P(I = i|w1) + P(I = i|w2)) for i ∈I (6) The averaging function is inherently incremental, in the sense that avg(a, b, c) = avg(avg(a, b), c) and hence it can be extended “on the right”. This represents an incremental model where new information from the current increment is added to what is already known, resulting in an intersective way of composing the meaning of the phrase. This cannot account for all constructions (such as negation or generally quantification), of course; we leave exploring other constructions that could occur even in our ‘simple references’ to future work. 294 Relational references such as in Example (1) from the introduction have a more complex structure, being a relation between a (simple) reference to a landmark and a (simple) reference to a target. This structure is indicated abstractly in the following ‘parse’: [rel[srw1, . . . , wk][rr1, . . . , rn][srw′ 1, . . . , w′ m]], where the w are the target words, r the relational expression words, and w′ the landmark words. As mentioned above, the relational expression similarly is treated as a classifier (in fact, technically we contract expressions such as “to the left of” into a single token and learn one classifier for it), but expressing a judgement for pairs of objects. 
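Before turning to how this relational judgement is applied, the application and composition for simple references just described (equations (4)-(6)) can be sketched as follows. This is a minimal illustration, not the authors' code; the toy scene and the hand-written word scorers are assumptions standing in for trained classifiers.

```python
# Minimal sketch (not the authors' code) of eqs. (4)-(6): apply a word
# classifier to every candidate object, normalise to a distribution P(I|w),
# and average the word distributions to interpret a 'simple reference'.
import numpy as np

def word_distribution(p_word, scene):
    """P(I|w): normalised classifier scores over the candidate objects."""
    ids = sorted(scene)
    scores = np.array([p_word(scene[i]) for i in ids])
    return dict(zip(ids, scores / scores.sum()))

def simple_reference(word_scorers, scene):
    """P(I|w1..wk): average of the word distributions (extendable word by word)."""
    dists = [word_distribution(p, scene) for p in word_scorers]
    return {i: float(np.mean([d[i] for d in dists])) for i in scene}

# Toy usage: three objects with 2-d features, two hand-written word scorers.
scene = {0: np.array([0.9, 0.1]), 1: np.array([0.2, 0.8]), 2: np.array([0.5, 0.5])}
p_green = lambda x: 1.0 / (1.0 + np.exp(-(3.0 * x[0] - 1.5)))   # responds to feature 0
p_left = lambda x: 1.0 / (1.0 + np.exp(-(2.0 * x[1] - 1.0)))    # responds to feature 1
dist = simple_reference([p_green, p_left], scene)
print(max(dist, key=dist.get), dist)    # argmax is the current referent hypothesis
```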
It can be applied to a specific scene with a set of candidate objects (and hence, candidate pairs) in a similar way by applying the classifier to all pairs and normalising, resulting in a distribution over pairs: [[r]]W = P(R1, R2|r) (7) We expect the meaning of the phrase to be a function of the meaning of the constituent parts (the simple references, the relation expression, and the construction), that is: [[[rel[srw1, . . . , wk][rr][srw′ 1, . . . , w′ m]]]] = [[rel]]([[sr]][[w1 . . . wk]], [[r]], [[sr]][[w′ 1 . . . w′ m]]) (8) (dropping the indicator for concrete application, W on [[ ]], for reasons of space and readability). What is the contribution of the relational construction, [[rel]]? Intuitively, what we want to express here is that the belief in an object being the intended referent should combine the evidence from the simple reference to the landmark object (e.g., “the mug” in (1)), from the simple (but presumably deficient) reference to the target object (“the green book on the left”), and that for the relation between them (“next to”). Instead of averaging (that is, combining additively), as for sr, we combine this evidence multiplicatively here: If the target constituent contributes P(It|w1, . . . , wk), the landmark constituent P(Il|w′ 1, . . . , w′ m), and the relation expression P(R1, R2|r), with Il, It, R1 and R2 all having the same domain, the set of all candidate objects, then the combination is P(R1|w1, . . . , wk, r, w′ 1, . . . , w′ m) = X R2 X Il X It P(R1, R2|r) ∗P(Il|w′ 1, . . . , w′ m)∗ P(It|w1, . . . , wk) ∗P(R1|It) ∗P(R2|Il) (9) The last two factors force identity on the elements of the pair and target and landmark, respectively (they are not learnt, but rather set to be 0 unless the values of R and I are equal), and so effectively reduce the summations so that all pairs need to be evaluated only once. The contribution of the construction then is this multiplication of the contributions of the parts, together with the factors enforcing that the pairs being evaluated by the relation expression consist of the objects evaluated by target and landmark expression, respectively. In the following section, we will explain the data we collected and used to evaluate our model, the evaluation procedure, and the results. 4 Experiments Figure 2: Example episode for phase-2 where the target is outlined in green (solid arrow added here for presentation), the landmark outlined in blue (dashed arrow). Data We evaluated our model using data we collected in a Wizard-of-Oz setting (that is, a human/computer interaction setting where parts of the functionality of the computer system were provided by a human experimentor). Participants were seated in front of a table with 36 Pentomino puzzle pieces that were randomly placed with some space between them, as shown in Figure 2. Above the table was a camera that recorded a video feed of the objects, processed using OpenCV (Pulli et al., 2012) to segment the objects (see below for details); of those, one (or one pair) was chosen randomly by the experiment software. The video image was presented to the participant on a display placed behind the table, but with the randomly selected piece (or pair of pieces) indicated by an overlay). The task of the participant was to refer to that object using only speech, as if identifying it for a friend sitting next to the participant. The wizard 295 (experimentor) had an identical screen depicting the scene but not the selected object. 
The wizard listened to the participant’s RE and clicked on the object she thought was being referred on her screen. If it was the target object, a tone sounded and a new object was randomly chosen. This constituted a single episode. If a wrong object was clicked, a different tone sounded, the episode was flagged, and a new episode began. At varied intervals, the participant was instructed to “shuffle” the board between episodes by moving around the pieces. The first half of the allotted time constituted phase-1. After phase-1 was complete, instructions for phase-2 were explained: the screen showed the target and also a landmark object, outlined in blue, near the target (again, see Figure 2). The participant was to refer to the target using the landmark. (In the instructions, the concepts of landmark and target were explained in general terms.) All other instructions remained the same as phase-1. The target’s identifier, which was always known beforehand, was always recorded. For phase-2, the landmark’s identifier was also recorded. Nine participants (6 female, 3 male; avg. age of 22) took part in the study; the language of the study was German. Phase-1 for one participant and phase-2 for another participant were not used due to misunderstanding and a technical difficulty. This produced a corpus of 870 non-flagged episodes in total. Even though each episode had 36 objects in the scene, all objects were not always recognised by the computer vision processing. On average, 32 objects were recognized. To obtain transcriptions, we used Google Web Speech (with a word error rate of 0.65, as determined by comparing to a hand transcribed sample) This resulted in 1587 distinct words, with 15.53 words on average per episode. The objects were not manipulated in any way during an episode, so the episode was guaranteed to remain static during a RE and a single image is sufficient to represent the layout of one episode’s scene. Each scene was processed using computer vision techniques to obtain low-level features for each (detected) object in the scene which were used for the word classifiers. We annotated each episode’s RE with a simple tagging scheme that segmented the RE into words that directly referred to the target, words that directly referred to the landmark (or multiple landmarks, in some cases) and the relation words. For certain word types, additional information about the word was included in the tag if it described colour, shape, or spatial placement (denoted contributing REs in the evaluations below). The direction of certain relation words was normalised (e.g., left-of should always denote a landmark-target relation). This represents a minimal amount of “syntactic” information needed for the application of the classifiers and the composition of the phrase meanings. We leave applying a syntactic parser to future work. An example RE in the original German (as recognised by the ASR), English gloss, and tags for each word is given in (2). (2) a. grauer stein ¨uber dem gr¨unen m unten links b. gray block above the green m bottom left c. tc ts r l lc ls tf tf To obtain visual features of each object, we used the same simple computer-vision pipeline of object segmentation and contour reconstruction as used by Kennington et al. (2015a), providing us with RGB representations for the colour and features such as skewness, number of edges etc. for the shapes. 
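The tags above determine which words feed the target model, the relation classifier, and the landmark model. With those groupings in hand, the relational combination of equation (9) reduces to scoring each candidate target by how well the target words fit it, weighted by how well some landmark fits both the landmark words and the relation. The sketch below is a minimal illustration, not the authors' code; the toy scene, the positions, and the hand-written relation scorer are assumptions.

```python
# Minimal sketch (not the authors' code) of eq. (9): after the identity factors,
# score(t) = P(I_t = t | target words) * sum_l P(R1=t, R2=l | relation) * P(I_l = l | landmark words).
import numpy as np

def pair_distribution(p_rel, scene):
    """P(R1, R2 | r): normalised relation scores over ordered pairs of distinct objects."""
    pairs = [(t, l) for t in scene for l in scene if t != l]
    scores = np.array([p_rel(scene[t], scene[l]) for t, l in pairs])
    return dict(zip(pairs, scores / scores.sum()))

def relational_reference(p_target, p_landmark, p_pair, scene):
    """Distribution over candidate targets for a relational reference."""
    score = {t: p_target[t] * sum(p_pair[(t, l)] * p_landmark[l]
                                  for l in scene if l != t)
             for t in scene}
    z = sum(score.values())
    return {t: s / z for t, s in score.items()}

# Toy usage: three objects with (x, y) positions; "above" prefers targets above the landmark.
scene = {0: np.array([0.2, 0.9]), 1: np.array([0.2, 0.1]), 2: np.array([0.8, 0.5])}
p_above = lambda xt, xl: 1.0 / (1.0 + np.exp(-8.0 * (xt[1] - xl[1])))
p_pair = pair_distribution(p_above, scene)
p_target = {0: 0.4, 1: 0.3, 2: 0.3}       # from the (possibly deficient) target simple reference
p_landmark = {0: 0.1, 1: 0.8, 2: 0.1}     # from the landmark simple reference
print(relational_reference(p_target, p_landmark, p_pair, scene))   # object 0 should come out on top
```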
Procedure We break down our data as follows: episodes where the target was referred directly via a ‘simple reference’ construction (DD; 410 episodes) and episodes where a target was referred via a landmark relation (RD; 460 episodes). We also test with either knowledge about structure (simple or relational reference) provided (ST) or not (WO, for “words-only”). All results shown are from 10-fold cross validations averaged over 10 runs; where for evaluations labelled RD the training data always includes all of DD plus 9 folds of RD, testing on RD. The sets address the following questions: • how well does the sr model work on its own with just words? – DD.WO • how well does the sr model work when it knows about REs? – DD.ST • how well does the sr model work when it knows about REs, but not about relations? – RD.ST (sr) • how well does the model learn relation words after it has learned about sr? RD.ST (r) • how well does the rr model work (together with the sr)? RD.ST with DD.ST (rr) Words were stemmed using the NLTK (Loper and Bird, 2002) Snowball Stemmer, reducing the 296 vocabulary size to 1306. Due to sparsity, for relation words with a token count of less than 4 (found by ranging over values in a held-out set) relational features were piped into an UNK relation, which was used for unseen relations during evaluation (we assume the UNK relation would learn a general notion of ‘nearness’). For the individual word classifiers, we always paired one negative example with one positive example. For this evaluation, word classifiers for sr were given the following features: RGB values, HSV values, x and y coordinates of the centroids, euclidean distance of centroid from the center, and number of edges. The relation classifiers received information relating two objects, namely the euclidean distance between them, the vertical and horizontal distances, and two binary features that denoted if the landmark was higher than/lower than or left/right of the target. 0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 DD.
 ST(rr) 65.3 % 55 % 42 % 54 % 40.9 % accuracy Figure 3: Results of our evaluation. Metrics for Evaluation To give a picture of the overall performance of the model, we report accuracy (how often was the argmax the gold target) and mean reciprocal rank (MRR) of the gold target in the distribution over all the objects (like accuracy, higher MRR values are better; values range between 0 and 1). The use of MRR is motivated by the assumption that in general, a good rank for the correct object is desirable, even if it doesn’t reach the first position, as when integrated in a dialogue system this information might still be useful to formulate clarification questions. Results Figure 3 shows the results. (Random baseline of 1/32 or 3% not shown in plot.) DD.WO shows how well the sr model performs using the whole utterances and not just the REs. (Note that all evaluations are on noisy ASR transcriptions.) DD.ST adds structure by only considering words that are part of the actual RE, improving the results further. The remaining sets evaluate the contributions of the rr model. RD.ST (sr) does this indirectly, by including the target and landmark simple references, but not the model for the relations; the task here is to resolve target and landmark SRs as they are. This provides the baseline for the next two evaluations, which include the relation model. In RD.ST (sr+r), the model learns SRs from DD data and only relations from RD. The performance is substantially better than the baseline without the relation model. Performance is best finally for RD.ST (rr), where the landmark and target SRs in the training portion of RD also contribute to the word models. The mean reciprocal rank scores follow a similar pattern and show that even though the target object was not the argmax of the distribution, on average it was high in the distribution. For all evaluations, the average standard deviation across the 10 runs was very small (0.01), meaning the model was fairly stable, despite the possibility of one run having randomly chosen more discriminating negative examples. Our conclusion from these experiments is that despite the small amount of training data and noise from ASR as well as the scene, the model is robust and yields respectable results. 0 2 4 6 8 10 12 14 5 0 5 10 15 20 25 Figure 5: Incremental results: average rank improves over time Incremental Results Figure 5 shows how our rr model processes incrementally, by giving the average rank of the (gold) target at each increment for the REs with the most common length in our data (13 words, of which there were 64 examples). A system that works incrementally would have a monotonically decreasing average rank as the utterance unfolds. The overall trend as shown in that 297 100 200 300 400 500 600 100 200 300 400 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 0 50 100 150 200 250 0.0 0.2 0.4 0.6 0.8 1.0 100 200 300 400 500 600 100 200 300 400 0.0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 Figure 4: Each plot represents how well selected words fit assumptions about their lexical semantics: the leftmost plot ecke (corner) yields higher probabilities as objects are closer to the corner; the middle plot gr¨un (green) yields higher probabilities when the colour spectrum values are nearer to green; the rightmost plot ¨uber (above) yields higher probabilities when targets are nearer to a landmark set in the middle. Figure is as expected. There is a slight increase between 6-7, though very small (a difference of 0.09). 
Overall, these results seem to show that our model indeed works intersectively and “zooms in” on the intended referent. 4.1 Further Analysis Analysis of Selected Words We analysed several individual word classifiers to determine how well their predictions match assumptions about their lexical semantics. For example, for the spatial word Ecke (corner), we would expect its classifier to return high probabilities if features related to an object’s position (e.g., x and y coordinates, distance from the center) are near corners of the scene. The leftmost plot in Figure 4 shows that this is indeed the case; by holding all non-position features constant and ranging over all points on the screen, we can see that the classifier gives high probabilities around the edges, particularly in the four corners, and very low probabilities in the middle region. Similarly for the colour word gr¨un, the centre plot in Figure 4 (overlaid with a colour spectrum) shows high probabilities are given when presented with the colour green, as expected. Similarly, for the relational word ¨uber (above), by treating the center point as the landmark and ranging over all other points on the plot for the target, the ¨uber classifier gives high probabilities when directly above the center point, with linear negative growth as the distance from the landmark increases. Note that we selected the type of feature to vary here for presentation; all classifiers get the full feature set and learn automatically to “ignore” the irrelevant features (e.g., that for gr¨un does not respond to variations in positional features). They do this wuite well, but we noticed some ‘blurring’, due to not all combinations of colours and shape being represented in the objects in the training set. Analysis of Incremental Processing Figure 6 finally shows the interpretation of the RE in Example (2) in the scene from Figure 2. The top row depicts the distribution over objects (true target shown in red) after the relation word unten (bottom) is uttered; the second row that for landmark objects, after the landmark description begins (dem gr¨unen m / the green m). The third row (target objects), ceases to change after the relational word is uttered, but continues again as additional target words are uttered (unten links / bottom left). While the true target is ranked highly already on the basis of the target SR alone, it is only when the relational information is added (top row) that it becomes argmax. Discussion We did not explore how well our model could handle generalised quantifiers, such as all (e.g., all the red objects) or a specific number of objects (e.g., the two green Ts). We speculate that one could see as the contribution of words such as all or two a change to how the distribution is evaluated (“return the n top candidates”). Our model also doesn’t yet directly handle more descriptive REs like the cross in the top-right corner on the left, as left is learned as a global term, or negation (the cross that’s not red). We leave exploring such constructions to future work. 5 Related Work Kelleher et al. (2005) approached RR using perceptually-grounded models, focusing on saliency and discourse context. In Gorniak and Roy (2004), descriptions of objects were used to learn a perceptually-grounded meaning with focus on spatial terms such as on the left. Steels and Belpaeme (2005) used neural networks to connect language with colour terms by interacting with humans. 
Larsson (2013) is closest in spirit to what we are attempting here; he provides a detailed 298 grauer stein über dem grünen m unten links Figure 6: A depiction of the model working incrementally for the RE in Example (2): the distribution over objects for relation is row 1, landmark is row 2, target is row 3. formal semantics for similarly descriptive terms, where parts of the semantics are modelled by a perceptual classifier. These approaches had limited lexicons (where we attempt to model all words in our corpus), and do not process incrementally, which we do here. Recent efforts in multimodal distributional semantics have also looked at modelling word meaning based on visual context. Originally, vector space distributional semantics focused words in the context of other words (Turney and Pantel, 2010); recent multimodal approaches also consider low-level features from images. Bruni et al. (2012) and Bruni et al. (2014) for example model word meaning by word and visual context; each modality is represented by a vector, fused by concatenation. Socher et al. (2014) and Kiros et al. (2014) present approaches where words/phrases and images are mapped into the same high-dimensional space. While these approaches similarly provide a link between words and images, they are typically tailored towards a different setting (the words being descriptions of the whole image, and not utterance intended to perform a function within a visual situation). We leave more detailed exploration of similarities and differences to future work and only note for now that our approach, relying on much simpler classifiers (log-linear, basically), works with much smaller data sets and additionally seem to provide an easier interface to more traditional ways of composition (see Section 3 above). The issue of semantic compositionality is also actively discussed in the distributional semantics literature (see, e.g., (Mitchell and Lapata, 2010; Erk, 2013; Lewis and Steedman, 2013; Paperno et al., 2014)), investigating how to combine vectors. This could be seen as composition on the level of intensions (if one sees distributional representations as intensions, as is variously hinted at, e.g. Erk (2013)). In our approach, composition is done on the extensional level (by interpolating distributions over candidate objects). We do not see our approach as being in opposition to these attempts. Rather, we envision a system of semantics that combines traditional symbolic expressions (on which inferences can be modelled via syntactic calculi) with distributed representations (which model conceptual knowledge / semantic networks, as well as encyclopedic knowledge) and with our action-based (namely, identification in the environment via perceptual information) semantics. This line of approach is connected to a number of recent works (e.g., (Erk, 2013; Lewis and Steedman, 2013; Larsson, 2013)); for now, exploring its ramifications is left for future work. 6 Conclusion In this paper, we presented a model of reference resolution that learns a perceptually-grounded meaning of words, including relational words. The model is simple, compositional, and robust despite low amounts of training data and noisy modalities. Our model is not without limitations; it so far only handles definite descriptions, yet there are other ways to refer to real-world objects, such as via pronouns and deixis. A unified model that can handle all of these, similar in spirit perhaps to Funakoshi et al. 
(2012), but with perceptual groundings, is left for future work. Our approach could also benefit from improved object segmentation and repre299 sentation. Our next steps with this model is to handle compositional structures without relying on our closed tag set (e.g., using a syntactic parser). We also plan to test our model in a natural, interactive dialogue system. Acknowledgements We want to thank the anonymous reviewers for their comments. We also want to thank Spyros Kousidis for helping with data collection, Livia Dia for help with the computer vision processing, and Julian Hough for fruitful discussions on semantics, though we can’t blame them for any problems of the work that may remain. This research/work was supported by the Cluster of Excellence Cognitive Interaction Technology ’CITEC’ (EXC 277) at Bielefeld University, which is funded by the German Research Foundation (DFG). References Elia Bruni, Gemma Boleda, Marco Baroni, and NamKhanh Tran. 2012. Distributional semantics in technicolor. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, volume 1, pages 136–145. Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49:1–47. Herbert H Clark. 1996. Using Language, volume 23. Cambridge University Press. Nikos Engonopoulos, Martin Villalba, Ivan Titov, and Alexander Koller. 2013. Predicting the resolution of referring expressions from user behavior. In Proceedings of EMLNP, pages 1354–1359, Seattle, Washington, USA. Association for Computational Linguistics. Katrin Erk. 2013. Towards a semantics for distributional representations. In Proceedings of IWCS, pages 1–11, Potsdam, Germany. Charles J Fillmore. 1975. Pragmatics and the description of discourse. Radical pragmatics, pages 143– 166. Kotaro Funakoshi, Mikio Nakano, Takenobu Tokunaga, and Ryu Iida. 2012. A Unified Probabilistic Approach to Referring Expressions. In Proceedings of SIGDial, pages 237–246, Seoul, South Korea, July. Association for Computational Linguistics. L T F Gamut. 1991. Logic, Language and Meaning: Intensional Logic and Logical Grammar, volume 2. Chicago University Press, Chicago. Peter Gorniak and Deb Roy. 2004. Grounded semantic composition for visual scenes. Journal of Artificial Intelligence Research, 21:429–470. Stevan Harnad. 1990. The Symbol Grounding Problem. Physica D, 42:335–346. John Kelleher, Fintan Costello, and Jofsef Van Genabith. 2005. Dynamically structuring, updating and interrelating representations of visual and linguistic discourse context. Artificial Intelligence, 167(1–2):62–102. Casey Kennington, Spyros Kousidis, and David Schlangen. 2013. Interpreting Situated Dialogue Utterances: an Update Model that Uses Speech, Gaze, and Gesture Information. In Proceedings of SIGdial. Casey Kennington, Spyros Kousidis, and David Schlangen. 2014. Situated Incremental Natural Language Understanding using a Multimodal, Linguistically-driven Update Model. In Proceedings of CoLing. Casey Kennington, Livia Dia, and David Schlangen. 2015a. A Discriminative Model for PerceptuallyGrounded Incremental Reference Resolution. In Proceedings of IWCS. Association for Computational Linguistics. Casey Kennington, Ryu Iida, Takenobu Tokunaga, and David Schlangen. 2015b. Incrementally Tracking Reference in Human/Human Dialogue Using Linguistic and Extra-Linguistic Information. In NAACL, Denver, U.S.A. Association for Computational Linguistics. 
Ryan Kiros, Ruslan Salakhutdinov, and Richard S Zemel. 2014. Unifying Visual-Semantic Embeddings with Multimodal Neural Language Models. In Proceedings of NIPS 2014 Deep Learning Workshop, pages 1–13. Staffan Larsson. 2013. Formal semantics for perceptual classification. Journal of Logic and Computation. Mike Lewis and Mark Steedman. 2013. Combined Distributional and Logical Semantics. Transactions of the ACL, 1:179–192. Edward Loper and Steven Bird. 2002. NLTK: The natural language toolkit. In Proceedings of the ACL-02 Workshop on Effective tools and methodologies for teaching natural language processing and computational linguistics-Volume 1, pages 63–70. Association for Computational Linguistics. Cynthia Matuszek, Liefeng Bo, Luke Zettlemoyer, and Dieter Fox. 2014. Learning from Unscripted Deictic Gesture and Language for Human-Robot Interactions. In AAAI. AAAI Press. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive science, 34(8):1388–1429, November. 300 Richard Montague. 1973. The Proper Treatment of Quantifikation in Ordinary English. In J Hintikka, J Moravcsik, and P Suppes, editors, Approaches to Natural Language: Proceedings of the 1970 Stanford Workshop on Grammar and Semantics, pages 221–242, Dordrecht. Reidel. Denis Paperno, Nghia The Pham, and Marco Baroni. 2014. A practical and linguistically-motivated approach to compositional distributional semantics. In Proceedings of ACL, pages 90–99. Barbara H Partee, Alice ter Meuelen, and Robert E Wall. 1993. Mathematical Methods in Linguistics. Kluwer Academic Publishers, Dordrecht. Kari Pulli, Anatoly Baksheev, Kirill Kornyakov, and Victor Eruhimov. 2012. Real-time computer vision with OpenCV. Communications of the ACM, 55(6):61–69. Richard Socher, Andrej Karpathy, Quoc V Le, Christopher D Manning, and Andrew Y Ng. 2014. Grounded Compositional Semantics for Finding and Describing Images with Sentences. Transactions of the Association for Computational Linguistics (TACL), 2:207–218. Luc Steels and Tony Belpaeme. 2005. Coordinating perceptually grounded categories through language: a case study for colour. The Behavioral and brain sciences, 28(4):469–489; discussion 489–529. Peter D Turney and Patrick Pantel. 2010. From Frequency to Meaning: Vector Space Models of Semantics. Artificial Intelligence, 37(1):141–188. 301
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 20–30, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Encoding Source Language with Convolutional Neural Network for Machine Translation Fandong Meng1 Zhengdong Lu2 Mingxuan Wang1 Hang Li2 Wenbin Jiang1 Qun Liu3,1 1Key Laboratory of Intelligent Information Processing, Institute of Computing Technology, Chinese Academy of Sciences {mengfandong,wangmingxuan,jiangwenbin,liuqun}@ict.ac.cn 2Noah’s Ark Lab, Huawei Technologies {Lu.Zhengdong,HangLi.HL}@huawei.com 3ADAPT Centre, School of Computing, Dublin City University Abstract The recently proposed neural network joint model (NNJM) (Devlin et al., 2014) augments the n-gram target language model with a heuristically chosen source context window, achieving state-of-the-art performance in SMT. In this paper, we give a more systematic treatment by summarizing the relevant source information through a convolutional architecture guided by the target information. With different guiding signals during decoding, our specifically designed convolution+gating architectures can pinpoint the parts of a source sentence that are relevant to predicting a target word, and fuse them with the context of entire source sentence to form a unified representation. This representation, together with target language words, are fed to a deep neural network (DNN) to form a stronger NNJM. Experiments on two NIST Chinese-English translation tasks show that the proposed model can achieve significant improvements over the previous NNJM by up to +1.08 BLEU points on average. 1 Introduction Learning of continuous space representation for source language has attracted much attention in both traditional statistical machine translation (SMT) and neural machine translation (NMT). Various models, mostly neural network-based, have been proposed for representing the source sentence, mainly as the encoder part in an encoder-decoder framework (Bengio et al., 2003; Auli et al., 2013; Kalchbrenner and Blunsom, 2013; Cho et al., 2014; Sutskever et al., 2014). There has been some quite recent work on encoding only “relevant” part of source sentence during the decoding process, most notably neural network joint model (NNJM) in (Devlin et al., 2014), which extends the n-grams target language model by additionally taking a fixed-length window of source sentence, achieving state-of-the-art performance in statistical machine translation. In this paper, we propose novel convolutional architectures to dynamically encode the relevant information in the source language. Our model covers the entire source sentence, but can effectively find and properly summarize the relevant parts, guided by the information from the target language. With the guiding signals during decoding, our specifically designed convolution architectures can pinpoint the parts of a source sentence that are relevant to predicting a target word, and fuse them with the context of entire source sentence to form a unified representation. This representation, together with target words, are fed to a deep neural network (DNN) to form a stronger NNJM. Since our proposed joint model is purely lexicalized, it can be integrated into any SMT decoder as a feature. Two variants of the joint model are also proposed, with coined name tagCNN and inCNN, with different guiding signals used from the decoding process. 
We integrate the proposed joint models into a state-of-the-art dependency-to-string translation system (Xie et al., 2011) to evaluate their effectiveness. Experiments on NIST Chinese-English translation tasks show that our model is able to achieve significant improvements of +2.0 BLEU points on average over the baseline. Our model also outperforms Devlin et al. (2014)’s NNJM by up to +1.08 BLEU points. 20 (a) tagCNN (b) inCNN Figure 1: Illustration for joint LM based on CNN encoder. RoadMap: In the remainder of this paper, we start with a brief overview of joint language model in Section 2, while the convolutional encoders, as the key component of which, will be described in detail in Section 3. Then in Section 4 we discuss the decoding algorithm with the proposed models. The experiment results are reported in Section 5, followed by Section 6 and 7 for related work and conclusion. 2 Joint Language Model Our joint model with CNN encoders can be illustrated in Figure 1 (a) & (b), which consists 1) a CNN encoder, namely tagCNN or inCNN, to represent the information in the source sentences, and 2) an NN-based model for predicting the next words, with representations from CNN encoders and the history words in target sentence as inputs. In the joint language model, the probability of the target word en, given previous k target words {en−k, · · ·, en−1} and the representations from CNN-encoders for source sentence S are tagCNN: p(en|φ1(S, {a(en)}), {e}n−1 n−k) inCNN: p(en| φ2(S, h({e}n−1 n−k)), {e}n−1 n−k), where φ1(S, {a(en)}) stands for the representation given by tagCNN with the set of indexes {a(en)} of source words aligned to the target word en, and φ2(S, h({e}n−1 n−k)) stands for the representation from inCNN with the attention signal h({e}n−1 n−k). Let us use the example in Figure 1, where the task is to translate the Chinese sentence into English. In evaluating a target language sequence “holds parliament and presidential”, with “holds parliament and” as the proceeding words (assume 4-gram LM), and the affiliated source word1 of “presidential” being “Zˇongtˇong” (determined by word alignment), tagCNN generates φ1(S, {4}) (the index of “Zˇongtˇong” is 4), and inCNN generates φ2(S, h(holds parliament and)). The DNN component then takes "holds parliament and" and (φ1 or φ2) as input to give the conditional probability for next word, e.g., p("presidential"|φ1|2, {holds, parliament, and}). 3 Convolutional Models We start with the generic architecture for convolutional encoder, and then proceed to tagCNN and inCNN as two extensions. 1For an aligned target word, we take its aligned source words as its affiliated source words. And for an unaligned word, we inherit its affiliation from the closest aligned word, with preference given to the right (Devlin et al., 2014). Since the word alignment is of many-to-many, one target word may has multi affiliated source words. 21 Figure 2: Illustration for the CNN encoders. 3.1 Generic CNN Encoder The basic architecture is of a generic CNN encoder is illustrated in Figure 2 (a), which has a fixed architecture consisting of six layers: Layer-0: the input layer, which takes words in the form of embedding vectors. In our work, we set the maximum length of sentences to 40 words. For sentences shorter than that, we put zero padding at the beginning of sentences. Layer-1: a convolution layer after Layer-0, with window size = 3. As will be discussed in Section 3.2 and 3.3, the guiding signal are injected into this layer for “guided version”. 
Layer-2: a local gating layer after Layer1, which simply takes a weighted sum over feature-maps in non-adjacent window with size = 2. Layer-3: a convolution layer after Layer-2, we perform another convolution with window size = 3. Layer-4: we perform a global gating over feature-maps on Layer-3. Layer-5: fully connected weights that maps the output of Layer-4 to this layer as the final representation. 3.1.1 Convolution As shown in Figure 2 (a), the convolution in Layer-1 operates on sliding windows of words (width k1), and the similar definition of windows carries over to higher layers. Formally, for source sentence input x = {x1, · · · , xN}, the convolution unit for feature map of type-f (among Fℓof them) on Layer-ℓis z(ℓ,f) i (x) = σ(w(ℓ,f)ˆz(ℓ−1) i + b(ℓ,f)), ℓ= 1, 3, f = 1, 2, · · · , Fℓ (1) where • z(ℓ,f) i (x) gives the output of feature map of type-f for location i in Layer-ℓ; • w(ℓ,f) is the parameters for f on Layer-ℓ; • σ(·) is the Sigmoid activation function; • ˆz(ℓ−1) i denotes the segment of Layer-ℓ−1 for the convolution at location i , while ˆz(0) i def = [x⊤ i , x⊤ i+1, x⊤ i+2]⊤ concatenates the vectors for 3 words from sentence input x. 3.1.2 Gating Previous CNNs, including those for NLP tasks (Hu et al., 2014; Kalchbrenner et al., 2014), take a straightforward convolutionpooling strategy, in which the “fusion” decisions (e.g., selecting the largest one in maxpooling) are based on the values of featuremaps. This is essentially a soft template matching, which works for tasks like classification, but harmful for keeping the composition functionality of convolution, which is critical for modeling sentences. In this paper, we propose to use separate gating unit to release the score function duty from the convolution, and let it focus on composition. 22 We take two types of gating: 1) for Layer2, we take a local gating with non-overlapping windows (size = 2) on the feature-maps of convolutional Layer-1 for representation of segments, and 2) for Layer-4, we take a global gating to fuse all the segments for a global representation. We found that this gating strategy can considerably improve the performance of both tagCNN and inCNN over pooling. • Local Gating: On Layer-1, for every gating window, we first find its original input (before convolution) on Layer-0, and merge them for the input of the gating network. For example, for the two windows: word (3,4,5) and word (4,5,6) on Layer-0, we use concatenated vector consisting of embedding for word (3,4,5,6) as the input of the local gating network (a logistic regression model) to determine the weight for the convolution result of the two windows (on Layer-1), and the weighted sum are the output of Layer-2. • Global Gating: On Layer-3, for featuremaps at each location i, denoted z(3) i , the global gating network (essentially softmax, parameterized wg), assigns a normalized weight ω(z(3) i ) = ew⊤ g z(3) i / X j ew⊤ g z(3) j , and the gated representation on Layer4 is given by the weighted sum P i ω(z(3) i )z(3) i . 3.1.3 Training of CNN encoders The CNN encoders, including tagCNN and inCNN that will be discussed right below, are trained in a joint language model described in Section 2, along with the following parameters • the embedding of the words on source and the proceeding words on target; • the parameters for the DNN of joint language model, include the parameters of soft-max for word probability. 
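To make the encoder of Sections 3.1.1 and 3.1.2 concrete before turning to training, the convolution unit of equation (1) and the global gating can be sketched as follows. The dimensions, the random parameters, and the NumPy implementation are illustrative assumptions, not the released system.

```python
# Minimal sketch (not the authors' implementation) of eq. (1) and global gating:
# a sigmoid convolution over windows of 3 embeddings, followed by a
# softmax-weighted sum of the resulting feature maps.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conv_layer(X, W, b, width=3):
    """X: (n_words, d) embeddings; W: (n_filters, width*d); returns (n_windows, n_filters)."""
    n, d = X.shape
    windows = np.stack([X[i:i + width].reshape(-1) for i in range(n - width + 1)])
    return sigmoid(windows @ W.T + b)     # z_i^(l,f) = sigma(w^(l,f) . zhat_i + b^(l,f))

def global_gating(Z, w_g):
    """Softmax weights over locations, then the gated (weighted-sum) representation."""
    scores = Z @ w_g
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights @ Z                    # sum_i omega(z_i) * z_i

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 4))               # 8 source words with 4-dimensional embeddings
Z = conv_layer(X, rng.normal(size=(5, 12)), rng.normal(size=5))
print(global_gating(Z, rng.normal(size=5)).shape)    # one 5-dimensional summary vector
```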
The training procedure is identical to that of neural network language model, except that the parallel corpus is used instead of a monolingual corpus. We seek to maximize the loglikelihood of training samples, with one sample for every target word in the parallel corpus. Optimization is performed with the conventional back-propagation, implemented as stochastic gradient descent (LeCun et al., 1998) with mini-batches. 3.2 tagCNN tagCNN inherits the convolution and gating from generic CNN (as described in Section 3.1), with the only modification in the input layer. As shown in Figure 2 (b), in tagCNN, we append an extra tagging bit (0 or 1) to the embedding of words in the input layer to indicate whether it is one of affiliated words x(AFF) i = [x⊤ i 1]⊤, x(NON-AFF) j = [x⊤ j 0]⊤. Those extended word embedding will then be treated as regular word-embedding in the convolutional neural network. This particular encoding strategy can be extended to embed more complicated dependency relation in source language, as will be described in Section 5.4. This particular “tag” will be activated in a parameterized way during the training for predicting the target words. In other words, the supervised signal from the words to predict will find, through layers of back-propagation, the importance of the tag bit in the “affiliated words” in the source language, and learn to put proper weight on it to make tagged words stand out and adjust other parameters in tagCNN accordingly for the optimal predictive performance. In doing so, the joint model can pinpoint the parts of a source sentence that are relevant to predicting a target word through the already learned word alignment. 3.3 inCNN Unlike tagCNN, which directly tells the location of affiliated words to the CNN encoder, inCNN sends the information about the proceeding words in target side to the convolutional encoder to help retrieve the information relevant for predicting the next word. This is essentially a particular case of attention model, analogous to the automatic alignment mechanism in (Bahdanau et al., 2014), where the at23 举行/VV 智利/NN 选举/NN 总统/NN 与/CC 国会/NN Chinese: 智利 举行 国会 与 总统 选举 English: Chile holds parliament and presidential elections 举行 智利 X1:NN (a) (b) Chile holds X1 举行 (c) holds Figure 3: Illustration for a dependency tree (a) with three head-dependents relations in shadow, an example of head-dependents relation rule (b) for the top level of (a), and an example of head rule (c). “X1:NN” indicates a substitution site that can be replaced by a subtree whose root has part-of-speech “NN”. The underline denotes a leaf node. tention signal is from the state of a generative recurrent neural network (RNN) as decoder. Basically, the information from proceeding words, denoted as h({e}n−1 n−k), is injected into every convolution window in the source language sentence, as illustrated in Figure 2 (c). More specifically, for the window indexed by t, the input to convolution is given by the concatenated vector ˆzt = [h({e}n−1 n−k), x⊤ t , x⊤ t+1, x⊤ t+2]⊤. In this work, we use a DNN to transform the vector concatenated from word-embedding for words {en−k · · · , en−k} into h({e}n−1 n−k), with sigmoid activation function. Through layers of convolution and gating, inCNN can 1) retrieve the relevant segments of source sentences, and 2) compose and transform the retrieved segments into representation recognizable by the DNN in predicting the words in target language. 
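The difference between the two guided inputs can be seen in a short sketch, again an illustration under assumed dimensions rather than the actual system: tagCNN extends each source embedding with an affiliation bit, while inCNN concatenates the attention vector h({e}) to every convolution window.

```python
# Minimal sketch (not the authors' code) of the two guided inputs:
# tagCNN appends a 0/1 affiliation bit to each source word embedding;
# inCNN concatenates the target-side attention vector h to every window.
import numpy as np

def tagcnn_input(X, affiliated):
    """X: (n_words, d) source embeddings; affiliated: indexes of affiliated source words."""
    bits = np.array([[1.0 if i in affiliated else 0.0] for i in range(len(X))])
    return np.hstack([X, bits])                              # (n_words, d + 1)

def incnn_windows(X, h, width=3):
    """Concatenate h({e}_{n-k}^{n-1}) to each width-3 window of embeddings."""
    return np.stack([np.concatenate([h, X[i:i + width].reshape(-1)])
                     for i in range(len(X) - width + 1)])

rng = np.random.default_rng(1)
X = rng.normal(size=(6, 4))                # 6 source words, 4-dimensional embeddings
print(tagcnn_input(X, {3}).shape)          # (6, 5): the 4th word carries tag bit 1
print(incnn_windows(X, rng.normal(size=4)).shape)   # (4, 16): h (4 dims) + 3 words x 4 dims
```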
Different from that of tagCNN, inCNN uses information from proceeding words, hence provides complementary information in the augmented joint language model of tagCNN. This has been empirically verified when using feature based on tagCNN and that based on inCNN in decoding with greater improvement. 4 Decoding with the Joint Model Our joint model is purely lexicalized, and therefore can be integrated into any SMT decoders as a feature. For a hierarchical SMT decoder, we adopt the integrating method proposed by Devlin et al. (2014). As inherited from the n-gram language model for performing hierarchical decoding, the leftmost and rightmost n −1 words from each constituent should be stored in the state space. We extend the state space to also include the indexes of the affiliated source words for each of these edge words. For an aligned target word, we take its aligned source words as its affiliated source words. And for an unaligned word, we use the affiliation heuristic adopted by Devlin et al. (2014). In this paper, we integrate the joint model into the state-of-the-art dependency-to-string machine translation decoder as a case study to test the efficacy of our proposed approaches. We will briefly describe the dependency-to-string translation model and then the description of MT system. 4.1 Dependency-to-String Translation In this paper, we use a state-of-the-art dependency-to-string (Xie et al., 2011) decoder (Dep2Str), which is also a hierarchical decoder. This dependency-to-string model employs rules that represent the source side as head-dependents relations and the target side as strings. A head-dependents relation (HDR) is composed of a head and all its dependents in dependency trees. Figure 3 shows a dependency tree (a) with three HDRs (in shadow), 24 an example of HDR rule (b) for the top level of (a), and an example of head rule (c). HDR rules are constructed from head-dependents relations. HDR rules can act as both translation rules and reordering rules. And head rules are used for translating source words. We adopt the decoder proposed by Meng et al. (2013) as a variant of Dep2Str translation that is easier to implement with comparable performance. Basically they extract the HDR rules with GHKM (Galley et al., 2004) algorithm. For the decoding procedure, given a source dependency tree T, the decoder transverses T in post-order. The bottomup chart-based decoding algorithm with cube pruning (Chiang, 2007; Huang and Chiang, 2007) is used to find the k-best items for each node. 4.2 MT Decoder Following Och and Ney (2002), we use a general loglinear framework. Let d be a derivation that convert a source dependency tree into a target string e. The probability of d is defined as: P(d) ∝ Y i φi(d)λi (2) where φi are features defined on derivations and λi are the corresponding weights. Our decoder contains the following features: Baseline Features: • translation probabilities P(t|s) and P(s|t) of HDR rules; • lexical translation probabilities PLEX(t|s) and PLEX(s|t) of HDR rules; • rule penalty exp(−1); • pseudo translation rule penalty exp(−1); • target word penalty exp(|e|); • n-gram language model PLM(e); Proposed Features: • n-gram tagCNN joint language model PTLM(e); • n-gram inCNN joint language model PILM(e). Our baseline decoder contains the first eight features. The pseudo translation rule (constructed according to the word order of a HDR) is to ensure the complete translation when no matched rules is found during decoding. 
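The log-linear combination in equation (2) amounts to a weighted sum of log feature values per derivation, with the two joint models entering as two additional features. The sketch below is a minimal illustration with hypothetical feature names and placeholder weights, not the actual decoder.

```python
# Minimal sketch (not the decoder) of eq. (2), P(d) proportional to prod_i phi_i(d)^lambda_i:
# a derivation is scored by sum_i lambda_i * log phi_i(d), and the tagCNN/inCNN
# joint language models are simply two more features in the sum.
import math

def derivation_score(features, weights):
    """features, weights: dicts keyed by feature name; returns log P(d) up to a constant."""
    return sum(weights[name] * math.log(value) for name, value in features.items())

# Hypothetical feature values for one candidate derivation.
features = {
    "p_lm": 1e-12,               # n-gram language model P_LM(e)
    "p_trans": 3e-4,             # translation probability of the HDR rules
    "p_tagcnn": 5e-9,            # n-gram tagCNN joint language model P_TLM(e)
    "p_incnn": 2e-9,             # n-gram inCNN joint language model P_ILM(e)
    "word_penalty": math.exp(7), # exp(|e|) for a 7-word output
}
weights = {"p_lm": 0.3, "p_trans": 0.2, "p_tagcnn": 0.15,
           "p_incnn": 0.15, "word_penalty": -0.1}   # placeholders; in practice tuned by MERT
print(derivation_score(features, weights))
```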
The weights of all these features are tuned via minimum error rate training (MERT) (Och, 2003). For the dependency-to-string decoder, we set rule-threshold and stack-threshold to 10−3, rule-limit to 100, stack-limit to 200. 5 Experiments The experiments in this Section are designed to answer the following questions: 1. Are our tagCNN and inCNN joint language models able to improve translation quality, and are they complementary to each other? 2. Do inCNN and tagCNN benefit from their guiding signal, compared to a generic CNN? 3. For tagCNN, is it helpful to embed more dependency structure, e.g., dependency head of each affiliated word, as additional information? 4. Can our gating strategy improve the performance over max-pooling? 5.1 Setup Data: Our training data are extracted from LDC data2. We only keep the sentence pairs that the length of source part no longer than 40 words, which covers over 90% of the sentence. The bilingual training data consist of 221K sentence pairs, containing 5.0 million Chinese words and 6.8 million English words. The development set is NIST MT03 (795 sentences) and test sets are MT04 (1499 sentences) and MT05 (917 sentences) after filtering with length limit. Preprocessing: The word alignments are obtained with GIZA++ (Och and Ney, 2003) on the corpora in both directions, using the “growdiag-final-and” balance strategy (Koehn et al., 2003). We adopt SRI Language Modeling 2The corpora include LDC2002E18, LDC2003E07, LDC2003E14, LDC2004T07, LDC2005T06. 25 Systems MT04 MT05 Average Moses 34.33 31.75 33.04 Dep2Str 34.89 32.24 33.57 + BBN-JM (Devlin et al., 2014) 36.11 32.86 34.49 + CNN (generic) 36.12* 33.07* 34.60 + tagCNN 36.33* 33.37* 34.85 + inCNN 36.92* 33.72* 35.32 + tagCNN + inCNN 36.94* 34.20* 35.57 Table 1: BLEU-4 scores (%) on NIST MT04-test and MT05-test, of Moses (default settings), dependency-to-string baseline system (Dep2Str), and different features on top of Dep2Str: neural network joint model (BBN-JM), generic CNN, tagCNN, inCNN and the combination of tagCNN and inCNN. The boldface numbers and superscript ∗indicate that the results are significantly better (p<0.01) than those of the BBN-JM and the Dep2Str baseline respectively. “+” stands for adding the corresponding feature to Dep2Str. Toolkit (Stolcke and others, 2002) to train a 4-gram language model with modified KneserNey smoothing on the Xinhua portion of the English Gigaword corpus (306 million words). We parse the Chinese sentences with Stanford Parser into projective dependency trees. Optimization of NN: In training the neural network, we limit the source and target vocabulary to the most frequent 20K words for both Chinese and English, covering approximately 97% and 99% of two corpus respectively. All the out-of-vocabulary words are mapped to a special token UNK. We used stochastic gradient descent to train the joint model, setting the size of minibatch to 500. All joint models used a 3word target history (i.e., 4-gram LM). The dimension of word embedding and the attention signal h({e}n−1 n−k) for inCNN are 100. For the convolution layers (Layer 1 and Layer 3), we apply 100 filters. And the final representation of CNN encoders is a vector with dimension 100. The final DNN layer of our joint model is the standard multi-layer perceptron with softmax at the top layer. Metric: We use the case-insensitive 4gram NIST BLEU3 as our evaluation metric, with statistical significance test with signtest (Collins et al., 2005) between the proposed models and two baselines. 
3ftp://jaguar.ncsl.nist.gov/mt/ resources/mteval-v11b.pl 5.2 Setting for Model Comparisons We use the tagCNN and inCNN joint language models as additional decoding features to a dependency-to-string baseline system (Dep2Str), and compare them to the neural network joint model with 11 source context words (Devlin et al., 2014). We use the implementation of an open source toolkit4 with default configuration except the global settings described in Section 5.1. Since our tagCNN and inCNN models are source-totarget and left-to-right (on target side), we only take the source-to-target and left-to-right type NNJM in (Devlin et al., 2014) in comparison. We call this type NNJM as BBN-JM hereafter. Although the BBN-JM in (Devlin et al., 2014) is originally tested in the hierarchical phrase-based (Chiang, 2007) SMT and stringto-dependency (Shen et al., 2008) SMT, it is fairly versatile and can be readily integrated into Dep2Str. 5.3 The Main Results The main results of different models are given in Table 1. Before proceeding to more detailed comparison, we first observe that • the baseline Dep2Str system gives BLEU 0.5+ higher than the open-source phrasebased system Moses (Koehn et al., 2007); • BBN-JM can give about +0.92 BLEU score over Dep2Str, a result similar as reported in (Devlin et al., 2014). 4http://nlg.isi.edu/software/nplm/ 26 Systems MT04 MT05 Average Dep2str 34.89 32.24 33.57 +tagCNN 36.33 33.37 34.85 +tagCNN dep 36.54 33.61 35.08 Table 2: BLEU-4 scores (%) of tagCNN model with dependency head words as additional tags (tagCNN dep). Clearly from Table 1, tagCNN and inCNN improve upon the Dep2Str baseline by +1.28 and +1.75 BLEU, outperforming BBN-JM in the same setting by respectively +0.36 and +0.83 BLEU, averaged on NIST MT04 and MT05. These indicate that tagCNN and inCNN can individually provide discriminative information in decoding. It is worth noting that inCNN appears to be more informative than the affiliated words suggested by the word alignment (GIZA++). We conjecture that this is due to the following two facts • inCNN avoids the propagation of mistakes and artifacts in the already learned word alignment; • the guiding signal in inCNN provides complementary information to evaluate the translation. Moreover, when tagCNN and inCNN are both used in decoding, it can further increase its winning margin over BBN-JM to +1.08 BLEU points (in the last row of Table 1), indicating that the two models with different guiding signals are complementary to each other. The Role of Guiding Signal It is slight surprising that the generic CNN can also achieve the gain on BLEU similar to that of BBNJM, since intuitively generic CNN encodes the entire sentence and the representations should in general far from optimal representation for joint language model. The reason, as we conjecture, is CNN yields fairly informative summarization of the sentence (thanks to its sophisticated convolution and gating architecture), which makes up some of its loss on resolution and relevant parts of the source senescence. That said, the guiding signal in both tagCNN and inCNN are crucial to the Systems MT04 MT05 Average Dep2Str 34.89 32.24 33.57 +inCNN 36.92 33.72 35.32 +inCNN-2-pooling 36.33 32.88 34.61 +inCNN-4-pooling 36.46 33.01 34.74 +inCNN-8-pooling 36.57 33.39 34.98 Table 3: BLEU-4 scores (%) of inCNN models implemented with gating strategy and k max-pooling, where k is of {2, 4, 8}. 
power of the CNN-based encoder, as can easily be seen from the difference between the BLEU scores achieved by the generic CNN, tagCNN, and inCNN. Indeed, with the signal from the previously learned word alignment, tagCNN gains +0.25 BLEU over its generic counterpart, while inCNN, guided by the preceding words on the target side, gains a more salient +0.72 BLEU.

5.4 Dependency Head in tagCNN In this section, we study whether tagCNN can further benefit from encoding richer dependency structure of the source language in its input. More specifically, dependency head words can be used to further improve the tagCNN model. As described in Section 3.2, in tagCNN we append a tagging bit (0 or 1) to the embedding of each word in the input layer to indicate whether it is an affiliated source word. To incorporate dependency head information, we extend this tagging rule with a second bit (0 or 1) that indicates whether the word is a dependency head of one of the affiliated words. For example, if x_i is the embedding of an affiliated source word and x_j the dependency head of word x_i, the extended input of tagCNN would contain x_i^(AFF, NON-HEAD) = [x_i^T 1 0]^T and x_j^(NON-AFF, HEAD) = [x_j^T 0 1]^T. If the affiliated source word is the root of the sentence, we append 0 as the second tagging bit, since the root has no dependency head. As Table 2 shows, dependency head information improves tagCNN by +0.23 BLEU points on average over the two test sets.

5.5 Gating vs. Max-pooling In this section, we investigate to what extent our gating strategy improves translation performance over max-pooling, using the inCNN model as a case study. To implement inCNN with max-pooling, we replace local gating (Layer-2) with max-pooling of size 2 ("2-pooling" for short) and global gating (Layer-4) with k-max-pooling ("k-pooling"), where k ∈ {2, 4, 8}. We then use the mean of the k-pooling outputs as the input to Layer-5, which guarantees that the input dimension of Layer-5 matches that of the gating architecture. Table 3 clearly shows that our gating strategy improves translation performance over max-pooling by 0.34–0.71 BLEU points. Moreover, 8-pooling outperforms 2-pooling. We conjecture that this is because the parts of the source sentence relevant for translation are concentrated on a few words, which a larger pool size can extract more reliably.

6 Related Work The seminal work on neural network language models (NNLMs) can be traced to Bengio et al. (2003) on monolingual text. It was recently extended by Devlin et al. (2014) to include additional source context (11 source words) when modeling the target sentence, which is clearly the work most closely related to ours, with two important differences: 1) instead of the ad hoc context window selection in (Devlin et al., 2014), our model covers the entire source sentence and automatically distills the context relevant for target modeling; 2) our convolutional architecture can effectively leverage guiding signals of vastly different forms and natures from the target side. Prior to our model, there was also work on representing source sentences with neural networks, including RNNs (Cho et al., 2014; Sutskever et al., 2014) and CNNs (Kalchbrenner and Blunsom, 2013).
These work typically aim to map the entire sentence to a vector, which will be used later by RNN/LSTMbased decoder to generate the target sentence. As demonstrated in Section 5, the representation learnt this way cannot pinpoint the relevant parts of the source sentences (e.g., words or phrases level) and therefore is inferior to be directly integrated into traditional SMT decoders. Our model, especially inCNN, is inspired by is the automatic alignment model proposed in (Bahdanau et al., 2014). As the first effort to apply attention model to machine translation, it sends the state of a decoding RNN as attentional signal to the source end to obtain a weighted sum of embedding of source words as the summary of relevant context. In contrast, inCNN uses 1) a different attention signal extracted from proceeding words in partial translations, and 2) more importantly, a convolutional architecture and therefore a highly nonlinear way to retrieve and summarize the relevant information in source. 7 Conclusion and Future Work We proposed convolutional architectures for obtaining a guided representation of the entire source sentence, which can be used to augment the n-gram target language model. With different guiding signals from target side, we devise tagCNN and inCNN, both of which are tested in enhancing a dependency-to-string SMT with +2.0 BLEU points over baseline and +1.08 BLEU points over the state-of-the-art in (Devlin et al., 2014). For future work, we will consider encoding more complex linguistic structures to further enhance the joint model. Acknowledgments Meng, Wang, Jiang and Liu are supported by National Natural Science Foundation of China (Contract 61202216). Liu is partially supported by the Science Foundation Ireland (Grant 12/CE/I2267 and 13/RC/2106) as part of the ADAPT Centre at Dublin City University. We sincerely thank the anonymous reviewers for their thorough reviewing and valuable suggestions. References [Auli et al.2013] Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. 2013. Joint language and translation modeling with recurrent neural networks. In Proceedings of the 28 2013 Conference on Empirical Methods in Natural Language Processing, pages 1044–1054, Seattle, Washington, USA, October. [Bahdanau et al.2014] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473. [Bengio et al.2003] Yoshua Bengio, Rjean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal OF Machine Learning Research, 3:1137–1155. [Chiang2007] David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. [Cho et al.2014] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder–decoder for statistical machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724– 1734, Doha, Qatar, October. [Collins et al.2005] Michael Collins, Philipp Koehn, and Ivona Kuˇcerov´a. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 531–540. [Devlin et al.2014] Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. 
Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1370–1380, Baltimore, Maryland, June. [Galley et al.2004] Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule. In Proceedings of HLT/NAACL, volume 4, pages 273–280. Boston. [Hu et al.2014] Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In NIPS. [Huang and Chiang2007] Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Annual Meeting-Association For Computational Linguistics, volume 45, pages 144–151. [Kalchbrenner and Blunsom2013] Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709, Seattle, Washington, USA, October. [Kalchbrenner et al.2014] Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. ACL. [Klein and Manning2002] Dan Klein and Christopher D Manning. 2002. Fast exact inference with a factored model for natural language parsing. In Advances in neural information processing systems, volume 15, pages 3–10. [Koehn et al.2003] Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrasebased translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology-Volume 1, pages 48–54. [Koehn et al.2007] Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 177–180, Prague, Czech Republic, June. [LeCun et al.1998] Y. LeCun, L. Bottou, G. Orr, and K. Muller. 1998. Efficient backprop. In Neural Networks: Tricks of the trade. Springer. [Meng et al.2013] Fandong Meng, Jun Xie, Linfeng Song, Yajuan L¨u, and Qun Liu. 2013. Translation with source constituency and dependency trees. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1066–1076, Seattle, Washington, USA, October. [Och and Ney2002] Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 295–302. [Och and Ney2003] Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational linguistics, 29(1):19–51. [Och2003] Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics-Volume 1, pages 160–167. 29 [Shen et al.2008] Libin Shen, Jinxi Xu, and Ralph Weischedel. 2008. A new string-to-dependency machine translation algorithm with a target dependency language model. In Proceedings of ACL-08: HLT, pages 577–585. [Stolcke and others2002] Andreas Stolcke et al. 2002. 
Srilm-an extensible language modeling toolkit. In Proceedings of the international conference on spoken language processing, volume 2, pages 901–904. [Sutskever et al.2014] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. CoRR, abs/1409.3215. [Xie et al.2011] Jun Xie, Haitao Mi, and Qun Liu. 2011. A novel dependency-to-string model for statistical machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 216–226. 30
2015
3
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 302–312, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Neural CRF Parsing Greg Durrett and Dan Klein Computer Science Division University of California, Berkeley {gdurrett,klein}@cs.berkeley.edu Abstract This paper describes a parsing model that combines the exact dynamic programming of CRF parsing with the rich nonlinear featurization of neural net approaches. Our model is structurally a CRF that factors over anchored rule productions, but instead of linear potential functions based on sparse features, we use nonlinear potentials computed via a feedforward neural network. Because potentials are still local to anchored rules, structured inference (CKY) is unchanged from the sparse case. Computing gradients during learning involves backpropagating an error signal formed from standard CRF sufficient statistics (expected rule counts). Using only dense features, our neural CRF already exceeds a strong baseline CRF model (Hall et al., 2014). In combination with sparse features, our system1 achieves 91.1 F1 on section 23 of the Penn Treebank, and more generally outperforms the best prior single parser results on a range of languages. 1 Introduction Neural network-based approaches to structured NLP tasks have both strengths and weaknesses when compared to more conventional models, such conditional random fields (CRFs). A key strength of neural approaches is their ability to learn nonlinear interactions between underlying features. In the case of unstructured output spaces, this capability has led to gains in problems ranging from syntax (Chen and Manning, 2014; Belinkov et al., 2014) to lexical semantics (Kalchbrenner et al., 2014; Kim, 2014). Neural methods are also powerful tools in the case of structured 1System available at http://nlp.cs.berkeley.edu output spaces. Here, past work has often relied on recurrent architectures (Henderson, 2003; Socher et al., 2013; ˙Irsoy and Cardie, 2014), which can propagate information through structure via realvalued hidden state, but as a result do not admit efficient dynamic programming (Socher et al., 2013; Le and Zuidema, 2014). However, there is a natural marriage of nonlinear induced features and efficient structured inference, as explored by Collobert et al. (2011) for the case of sequence modeling: feedforward neural networks can be used to score local decisions which are then “reconciled” in a discrete structured modeling framework, allowing inference via dynamic programming. In this work, we present a CRF constituency parser based on these principles, where individual anchored rule productions are scored based on nonlinear features computed with a feedforward neural network. A separate, identicallyparameterized replicate of the network exists for each possible span and split point. As input, it takes vector representations of words at the split point and span boundaries; it then outputs scores for anchored rules applied to that span and split point. These scores can be thought of as nonlinear potentials analogous to linear potentials in conventional CRFs. Crucially, while the network replicates are connected in a unified model, their computations factor along the same substructures as in standard CRFs. 
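Anticipating the formal definitions in Section 2.2, the following is a minimal sketch of what one network replicate computes for a single span and split point: the embeddings of the words around the span boundaries and the split point are concatenated, passed through one hidden layer of rectified linear units, and the resulting activations are combined with a sparse indicator vector over anchored-rule properties. All dimensions and names here are illustrative rather than taken from the released implementation.

```python
# A minimal sketch of one network replicate scoring anchored rules for a fixed
# span and split point: embeddings of the surrounding words feed a one-hidden-layer
# net, and the hidden activations are combined with a sparse rule indicator.
import numpy as np

rng = np.random.default_rng(0)
N_WORDS, EMB, HID, N_RULE_FEATS = 12, 50, 200, 4000

E = rng.normal(0, 0.1, (10000, EMB))            # word embeddings (kept fixed in the paper)
H = rng.normal(0, 0.01, (HID, N_WORDS * EMB))   # hidden-layer parameters
W = np.zeros((HID, N_RULE_FEATS))               # output weights, zero-initialised as in the paper

def hidden(word_ids):
    """h(w, s; H) = g(H v(f_w)) with rectified linear units."""
    v = np.concatenate([E[i] for i in word_ids])    # v(f_w): concatenated embeddings
    return np.maximum(H @ v, 0.0)

def score_anchored_rule(word_ids, rule_feature_ids):
    """phi = h^T W f_o(r); f_o is sparse, so only a few columns of W are touched."""
    h = hidden(word_ids)
    return sum(h @ W[:, j] for j in rule_feature_ids)

s = score_anchored_rule(word_ids=list(range(12)), rule_feature_ids=[17, 305])
print(s)   # 0.0 before training, since W starts at zero
```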
Prior work on parsing using neural network models has often sidestepped the problem of structured inference by making sequential decisions (Henderson, 2003; Chen and Manning, 2014; Tsuboi, 2014) or by doing reranking (Socher et al., 2013; Le and Zuidema, 2014); by contrast, our framework permits exact inference via CKY, since the model’s structured interactions are purely discrete and do not involve continuous hidden state. Therefore, we can exploit a neural net’s capacity to learn nonlinear features without modifying 302 S NP VP DT NNP VBZ NP … W The Fed issued Structured inference (discrete) Feature extraction (continuous) fo h φ fw v(fw) Figure 1: Neural CRF model. On the right, each anchored rule (r, s) in the tree is independently scored by a function φ, so we can perform inference with CKY to compute marginals or the Viterbi tree. On the left, we show the process for scoring an anchored rule with neural features: words in fw (see Figure 2) are embedded, then fed through a neural network with one hidden layer to compute dense intermediate features, whose conjunctions with sparse rule indicator features fo are scored according to parameters W. our core inference mechanism, allowing us to use tricks like coarse pruning that make inference efficient in the purely sparse model. Our model can be trained by gradient descent exactly as in a conventional CRF, with the gradient of the network parameters naturally computed by backpropagating a difference of expected anchored rule counts through the network for each span and split point. Using dense learned features alone, the neural CRF model obtains high performance, outperforming the CRF parser of Hall et al. (2014). When sparse indicators are used in addition, the resulting model gets 91.1 F1 on section 23 of the Penn Treebank, outperforming the parser of Socher et al. (2013) as well as the Berkeley Parser (Petrov and Klein, 2007) and matching the discriminative parser of Carreras et al. (2008). The model also obtains the best single parser results on nine other languages, again outperforming the system of Hall et al. (2014). 2 Model Figure 1 shows our neural CRF model. The model decomposes over anchored rules, and it scores each of these with a potential function; in a standard CRF, these potentials are typically linear functions of sparse indicator features, whereas reflected the flip side of the Stoltzman personality . reflected the side of personality . i j k [[PreviousWord = reflected]], [[SpanLength = 7]], … fs NP PP NP r = NP NP PP ! fw v(fw) Figure 2: Example of an anchored rule production for the rule NP →NP PP. From the anchoring s = (i, j, k), we extract either sparse surface features fs or a sequence of word indicators fw which are embedded to form a vector representation v(fw) of the anchoring’s lexical properties. in our approach they are nonlinear functions of word embeddings.2 Section 2.1 describes our notation for anchored rules, and Section 2.2 talks about how they are scored. We then discuss specific choices of our featurization (Section 2.3) and the backbone grammar used for structured inference (Section 2.4). 2.1 Anchored Rules The fundamental units that our parsing models consider are anchored rules. As shown in Figure 2, we define an anchored rule as a tuple (r, s), where r is an indicator of the rule’s identity and s = (i, j, k) indicates the span (i, k) and split point j of the rule.3 A tree T is simply a collection of anchored rules subject to the constraint that those rules form a tree. 
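For concreteness, a minimal sketch of this representation follows (names and the toy tree are illustrative): an anchored rule is simply a rule identity paired with (i, j, k), and a tree is a collection of such tuples over which any per-rule scorer can be summed.

```python
# A minimal sketch of the anchored-rule representation: a rule identity r paired
# with an anchoring s = (i, j, k), and a tree as a collection of such tuples.
from collections import namedtuple

AnchoredRule = namedtuple("AnchoredRule", ["rule", "i", "j", "k"])  # rule over span (i, k), split at j

# Toy binary rules for "The Fed issued ..." (unary and preterminal rules omitted).
tree = {
    AnchoredRule("S -> NP VP", 0, 2, 3),
    AnchoredRule("NP -> DT NNP", 0, 1, 2),
}

def tree_score(tree, score_fn):
    """The tree score decomposes over anchored rules, so any per-rule scorer plugs in."""
    return sum(score_fn(r) for r in tree)

print(tree_score(tree, lambda r: 1.0))   # 2.0 with a dummy scorer
```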
All of our parsing models are CRFs that decompose over anchored rule productions and place a probability distribution over trees conditioned on a sentence w as follows: P(T|w) ∝exp  X (r,s)∈T φ(w, r, s)   2Throughout this work, we will primarily consider two potential functions: linear functions of sparse indicators and nonlinear neural networks over dense, continuous features. Although other modeling choices are possible, these two points in the design space reflect common choices in NLP, and past work has suggested that nonlinear functions of indicators or linear functions of dense features may perform less well (Wang and Manning, 2013). 3For simplicity of exposition, we ignore unary rules; however, they are easily supported in this framework by simply specifying a null value for the split point. 303 where φ is a scoring function that considers the input sentence and the anchored rule in question. Figure 1 shows this scoring process schematically. As we will see, the module on the left can be be a neural net, a linear function of surface features, or a combination of the two, as long as it provides anchored rule scores, and the structured inference component is the same regardless (CKY). A PCFG estimated with maximum likelihood has φ(w, r, s) = log P(r|parent(r)), which is independent of the anchoring s and the words w except for preterminal productions; a basic discriminative parser might let this be a learned parameter but still disregard the surface information. However, surface features can capture useful syntactic cues (Finkel et al., 2008; Hall et al., 2014). Consider the example in Figure 2: the proposed parent NP is preceded by the word reflected and followed by a period, which is a surface context characteristic of NPs or PPs in object position. Beginning with the and ending with personality are typical properties of NPs as well, and the choice of the particular rule NP →NP PP is supported by the fact that the proposed child PP begins with of. This information can be captured with sparse features (fs in Figure 2) or, as we describe below, with a neural network taking lexical context as input. 2.2 Scoring Anchored Rules Following Hall et al. (2014), our baseline sparse scoring function takes the following bilinear form: φsparse(w, r, s; W) = fs(w, s)⊤Wfo(r) where fo(r) ∈{0, 1}no is a sparse vector of features expressing properties of r (such as the rule’s identity or its parent label) and fs(w, s) ∈ {0, 1}ns is a sparse vector of surface features associated with the words in the sentence and the anchoring, as shown in Figure 2. W is a ns × no matrix of weights.4 The scoring of a particular anchored rule is depicted in Figure 3a; note that surface features and rule indicators are conjoined in a systematic way. The role of fs can be equally well played by a vector of dense features learned via a neural net4A more conventional expression of the scoring function for a CRF is φ(w, r, s) = θ⊤f(w, r, s), with a vector θ for the parameters and a single feature extractor f that jointly inspects the surface and the rule. However, when the feature representation conjoins each rule r with surface properties of the sentence in a systematic way (an assumption that holds in our case as well as for standard CRF models for POS tagging and NER), this is equivalent to our formalism. fo W fo W fs Wij = w eight([[fs,i ^fo,j]]) a) b) fw v(fw) h φ = f > s Wfo φ = g(Hv(fw))>Wfo Figure 3: Our sparse (left) and neural (right) scoring functions for CRF parsing. 
fs and fw are raw surface feature vectors for the sparse and neural models (respectively) extracted over anchored spans with split points. (a) In the sparse case, we multiply fs by a weight matrix W and then a sparse output vector fo to score the rule production. (b) In the neural case, we first embed fw and then transform it with a one-layer neural network in order to produce an intermediate feature representation h before combining with W and fo. work. We will now describe how to compute these features, which represent a transformation of surface lexical indicators fw. Define fw(w, s) ∈Nnw to be a function that produces a fixed-length sequence of word indicators based on the input sentence and the anchoring. This vector of word identities is then passed to an embedding function v : N →Rne and the dense representations of the words are subsequently concatenated to form a vector we denote by v(fw).5 Finally, we multiply this by a matrix H ∈Rnh×(nwne) of realvalued parameters and pass it through an elementwise nonlinearity g(·). We use rectified linear units g(x) = max(x, 0) and discuss this choice more in Section 6. Replacing fs with the end result of this computation h(w, s; H) = g(Hv(fw(w, s))), our scoring function becomes φneural(w, r, s; H, W) = h(w, s; H)⊤Wfo(r) as shown in Figure 3b. For a fixed H, this model can be viewed as a basic CRF with dense input features. By learning H, we learn intermediate feature representations that provide the model with 5Embedding words allows us to use standard pre-trained vectors more easily and tying embeddings across word positions substantially reduces the number of model parameters. However, embedding features rather than words has also been shown to be effective (Chen et al., 2014). 304 more discriminating power. Also note that it is possible to use deeper networks or more sophisticated architectures here; we will return to this in Section 6. Our two models can be easily combined: φ(w, r, s; W1, H, W2) = φsparse(w, r, s; W1) + φneural(w, r, s; H, W2) Weights for each component of the scoring function can be learned fully jointly and inference proceeds as before. 2.3 Features We take fs to be the set of features described in Hall et al. (2014). At the preterminal layer, the model considers prefixes and suffixes up to length 5 of the current word and neighboring words, as well as the words’ identities. For nonterminal productions, we fire indicators on the words6 before and after the start, end, and split point of the anchored rule (as shown in Figure 2) as well as on two other span properties, span length and span shape (an indicator of where capitalized words, numbers, and punctuation occur in the span). For our neural model, we take fw for all productions (preterminal and nonterminal) to be the words surrounding the beginning and end of a span and the split point, as shown in Figure 2; in particular, we look two words in either direction around each point of interest, meaning the neural net takes 12 words as input.7 For our word embeddings v, we use pre-trained word vectors from Bansal et al. (2014). We compare with other sources of word vectors in Section 5. Contrary to standard practice, we do not update these vectors during training; we found that doing so did not provide an accuracy benefit and slowed down training considerably. 2.4 Grammar Refinements A recurring issue in discriminative constituency parsing is the granularity of annotation in the base grammar (Finkel et al., 2008; Petrov and Klein, 2008; Hall et al., 2014). 
Using finer-grained symbols in our rules r gives the model greater capacity, but also introduces more parameters into W and 6The model actually uses the longest suffix of each word occurring at least 100 times in the training set, up to the entire word. Removing this abstraction of rare words harms performance. 7The sparse model did not benefit from using this larger neighborhood, so improvements from the neural net are not simply due to considering more lexical context. increases the ability to overfit. Following Hall et al. (2014), we use grammars with very little annotation: we use no horizontal Markovization for any of experiments, and all of our English experiments with the neural CRF use no vertical Markovization (V = 0). This also has the benefit of making the system much faster, due to the smaller state space for dynamic programming. We do find that using parent annotation (V = 1) is useful on other languages (see Section 7.2), but this is the only grammar refinement we consider. 3 Learning To learn weights for our neural model, we maximize the conditional log likelihood of our D training trees T ∗: L(H, W) = D X i=1 log P(T ∗ i |wi; H, W) Because we are using rectified linear units as our nonlinearity, our objective is not everywhere differentiable. The interaction of the parameters and the nonlinearity also makes the objective nonconvex. However, in spite of this, we can still follow subgradients to optimize this objective, as is standard practice. Recall that h(w, s; H) are the hidden layer activations. The gradient of W takes the standard form of log-linear models: ∂L ∂W =  X (r,s)∈T ∗ h(w, s; H)fo(r)⊤  −  X T P(T|w; H, W) X (r,s)∈T h(w, s; H)fo(r)⊤   Note that the outer products give matrices of feature counts isomorphic to W. The second expression can be simplified to be in terms of expected feature counts. To update H, we use standard backpropagation by first computing: ∂L ∂h =  X (r,s)∈T ∗ Wfo(r)  −  X T P(T|w; H, W) X (r,s)∈T Wfo(r)   Since h is the output of the neural network, we can then apply the chain rule to compute gradients for H and any other parameters in the neural network. 305 Learning uses Adadelta (Zeiler, 2012), which has been employed in past work (Kim, 2014). We found that Adagrad (Duchi et al., 2011) performed equally well with tuned regularization and step size parameters, but Adadelta worked better out of the box. We set the momentum term ρ = 0.95 (as suggested by Zeiler (2012)) and did not regularize the weights at all. We used a minibatch size of 200 trees, although the system was not particularly sensitive to this. For each treebank, we trained for either 10 passes through the treebank or 1000 minibatches, whichever is shorter. We initialized the output weight matrix W to zero. To break symmetry, the lower level neural network parameters H were initialized with each entry being independently sampled from a Gaussian with mean 0 and variance 0.01; Gaussian performed better than uniform initialization, but the variance was not important. 4 Inference Our baseline and neural model both score anchored rule productions. We can use CKY in the standard fashion to compute either expected anchored rule counts EP(T|w)[(r, s)] or the Viterbi tree arg maxT P(T|w). We speed up inference by using a coarse pruning pass. We follow Hall et al. (2014) and prune according to an X-bar grammar with headoutward binarization, ruling out any constituent whose max marginal probability is less than e−9. 
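A minimal sketch of this pruning step is given below, assuming the coarse X-bar pass has already produced normalised max-marginal probabilities per span; the chart layout is an illustrative assumption.

```python
# Coarse pruning as described above: keep only spans whose max-marginal probability
# from the cheap X-bar pass clears the e^-9 threshold, so the expensive neural
# potentials are computed for far fewer spans and split points.
import math

PRUNE_THRESHOLD = math.exp(-9)

def prune_chart(max_marginals):
    """max_marginals: dict mapping (i, k) span -> max-marginal probability."""
    return {span for span, p in max_marginals.items() if p >= PRUNE_THRESHOLD}

coarse = {(0, 2): 0.62, (1, 3): 3e-7, (0, 3): 0.99}   # toy values
print(prune_chart(coarse))                            # only (0, 2) and (0, 3) survive
```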
With this pruning, the number of spans and split points to be considered is greatly reduced; however, we still need to compute the neural network activations for each remaining span and split point, of which there may be thousands for a given sentence.8 We can improve efficiency further by noting that the same word will appear in the same position in a large number of span/split point combinations, and cache the contribution to the hidden layer caused by that word (Chen and Manning, 2014). Computing the hidden layer then simply requires adding nw vectors together and applying the nonlinearity, instead of a more costly matrix multiply. Because the number of rule indicators no is fairly large (approximately 4000 in the Penn Treebank), the multiplication by W in the model is also 8One reason we did not choose to include the rule identity fo as an input to the network is that it requires computing an even larger number of network activations, since we cannot reuse them across rules over the same span and split point. expensive. However, because only a small number of rules can apply to a given span and split point, fo is sparse and we can selectively compute the terms necessary for the final bilinear product. Our combined sparse and neural model trains on the Penn Treebank in 24 hours on a single machine with a parallelized CPU implementation. For reference, the purely sparse model with a parentannotated grammar (necessary for the best results) takes around 15 hours on the same machine. 5 System Ablations Table 1 shows results on section 22 (the development set) of the English Penn Treebank (Marcus et al., 1993), computed using evalb. Full test results and comparisons to other systems are shown in Table 4. We compare variants of our system along two axes: whether they use standard linear sparse features, nonlinear dense features from the neural net, or both, and whether any word representations (vectors or clusters) are used. Sparse vs. neural The neural CRF (line (d) in Table 1) on its own outperforms the sparse CRF (a, b) even when the sparse CRF has a more heavily annotated grammar. This is a surprising result: the features in the sparse CRF have been carefully engineered to capture a range of linguistic phenomena (Hall et al., 2014), and there is no guarantee that word vectors will capture the same. For example, at the POS tagging layer, the sparse model looks at prefixes and suffixes of words, which give the model access to morphology for predicting tags of unknown words, which typically have regular inflection patterns. By contrast, the neural model must rely on the geometry of the vector space exposing useful regularities. At the same time, the strong performance of the combination of the two systems (g) indicates that not only are both featurization approaches highperforming on their own, but that they have complementary strengths. Unlabeled data Much attention has been paid to the choice of word vectors for various NLP tasks, notably whether they capture more syntactic or semantic phenomena (Bansal et al., 2014; Levy and Goldberg, 2014). We primarily use vectors from Bansal et al. (2014), who train the skipgram model of Mikolov et al. (2013) using contexts from dependency links; a similar approach was also suggested by Levy and Goldberg (2014). 306 Sparse Neural V Word Reps F1 len ≤40 F1 all Hall et al. 
(2014), V = 1 90.5 a ✓ 0 89.89 89.22 b ✓ 1 90.82 90.13 c ✓ 1 Brown 90.80 90.17 d ✓ 0 Bansal 90.97 90.44 e ✓ 0 Collobert 90.25 89.63 f ✓ 0 PTB 89.34 88.99 g ✓ ✓ 0 Bansal 92.04 91.34 h ✓ ✓ 0 PTB 91.39 90.91 Table 1: Results of our sparse CRF, neural CRF, and combined parsing models on section 22 of the Penn Treebank. Systems are broken down by whether local potentials come from sparse features and/or the neural network (the primary contribution of this work), their level of vertical Markovization, and what kind of word representations they use. The neural CRF (d) outperforms the sparse CRF (a, b) even when a more heavily annotated grammar is used, and the combined approach (g) is substantially better than either individual model. The contribution of the neural architecture cannot be replaced by Brown clusters (c), and even word representations learned just on the Penn Treebank are surprisingly effective (f, h). However, as these embeddings are trained on a relatively small corpus (BLLIP minus the Penn Treebank), it is natural to wonder whether lesssyntactic embeddings trained on a larger corpus might be more useful. This is not the case: line (e) in Table 1 shows the performance of the neural CRF using the Wikipedia-trained word embeddings of Collobert et al. (2011), which do not perform better than the vectors of Bansal et al. (2014). To isolate the contribution of continuous word representations themselves, we also experimented with vectors trained on just the text from the training set of the Penn Treebank using the skip-gram model with a window size of 1. While these vectors are somewhat lower performing on their own (f), they still provide a surprising and noticeable gain when stacked on top of sparse features (h), again suggesting that dense and sparse representations have complementary strengths. This result also reinforces the notion that the utility of word vectors does not come primarily from importing information about out-of-vocabulary words (Andreas and Klein, 2014). Since the neural features incorporate information from unlabeled data, we should provide the F1 len ≤40 ∆ Neural CRF 90.97 — Nonlinearity ReLU 90.97 — Tanh 90.74 −0.23 Cube 89.94 −1.03 Depth 0 HL 90.54 −0.43 1 HL 90.97 — 2 HL 90.58 −0.39 Embed output 88.81 −2.16 Table 2: Exploration of other implementation choices in the feedforward neural network on sentences of length ≤40 from section 22 of the Penn Treebank. Rectified linear units perform better than tanh or cubic units, a network with one hidden layer performs best, and embedding the output feature vector gives worse performance. sparse model with similar information for a true apples-to-apples comparison. Brown clusters have been shown to be effective vehicles in the past (Koo et al., 2008; Turian et al., 2010; Bansal et al., 2014). We can incorporate Brown clusters into the baseline CRF model in an analogous way to how embedding features are used in the dense model: surface features are fired on Brown cluster identities (we use prefixes of length 4 and 10) of key words. We use the Brown clusters from Koo et al. (2008), which are trained on the same data as the vectors of Bansal et al. (2014). However, Table 1 shows that these features provide no benefit to the baseline model, which suggests either that it is difficult to learn reliable weights for these as sparse features or that different regularities are being captured by the word embeddings. 
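For concreteness, the Brown-cluster features described above amount to firing sparse indicators on the length-4 and length-10 prefixes of each key word's cluster bit-string; a minimal sketch follows, with made-up cluster strings.

```python
# A minimal sketch of Brown-cluster prefix features: each word's cluster is a
# bit-string from the hierarchical clustering, and indicator features are fired on
# its length-4 and length-10 prefixes. The cluster strings below are invented
# purely for illustration.
BROWN = {"reflected": "1011010010110", "the": "0010", "personality": "1011001110001"}

def brown_prefix_features(word, position_tag):
    cluster = BROWN.get(word)
    if cluster is None:
        return []
    return [f"BrownPrefix{k}_{position_tag}={cluster[:k]}" for k in (4, 10)]

print(brown_prefix_features("reflected", "PrevWord"))
# ['BrownPrefix4_PrevWord=1011', 'BrownPrefix10_PrevWord=1011010010']
```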
6 Design Choices The neural net design space is large, so we wish to analyze the particular design choices we made for this system by examining the performance of several variants of the neural net architecture used in our system. Table 2 shows development results from potential alternate architectural choices, which we now discuss. Choice of nonlinearity The choice of nonlinearity g has been frequently discussed in the neural network literature. Our choice g(x) = max(x, 0), a rectified linear unit, is increasingly popular in 307 computer vision (Krizhevsky et al., 2012). g(x) = tanh(x) is a traditional nonlinearity widely used throughout the history of neural nets (Bengio et al., 2003). g(x) = x3 (cube) was found to be most successful by Chen and Manning (2014). Table 2 compares the performance of these three nonlinearities. We see that rectified linear units perform the best, followed by tanh units, followed by cubic units.9 One drawback of tanh as an activation function is that it is easily “saturated” if the input to the unit is too far away from zero, causing the backpropagation of derivatives through that unit to essentially cease; this is known to cause problems for training, requiring special purpose machinery for use in deep networks (Ioffe and Szegedy, 2015). Depth Given that we are using rectified linear units, it bears asking whether or not our implementation is improving substantially over linear features of the continuous input. We can use the embedding vector of an anchored span v(fw) directly as input to a basic linear CRF, as shown in Figure 4a. Table 1 shows that the purely linear architecture (0 HL) performs surprisingly well, but is still less effective than the network with one hidden layer. This agrees with the results of Wang and Manning (2013), who noted that dense features typically benefit from nonlinear modeling. We also compare against a two-layer neural network, but find that this also performs worse than the one-layer architecture. Densifying output features Overall, it appears beneficial to use dense representations of surface features; a natural question that one might ask is whether the same technique can be applied to the sparse output feature vector fo. We can apply the approach of Srikumar and Manning (2014) and multiply the sparse output vector by a dense matrix K, giving the following scoring function (shown in Figure 4b): φ(w, r, s; H, W, K) = g(Hv(fw(w, s)))⊤WKfo(r) where W is now nh × noe and K is noe × no. WK can be seen a low-rank approximation of the original W at the output layer, similar to low-rank factorizations of parameter matrices used in past 9The performance of cube decreased substantially late in learning; it peaked at around 90.52. Dropout may be useful for alleviating this type of overfitting, but in our experiments we did not find dropout to be beneficial overall. fo W W h a) b) fo Kfo φ = g(Hv(fw))>WKfo φ = v(fw)>Wfo fw v(fw) fw v(fw) Figure 4: Two additional forms of the scoring function. a) Linear version of the dense model, equivalent to a CRF with continuous-valued input features. b) Version of the dense model where outputs are also embedded according to a learned matrix K. work (Lei et al., 2014). This approach saves us from having to learn a separate row of W for every rule in the grammar; if rules are given similar embeddings, then they will behave similarly according to the model. We experimented with noe = 20 and show the results in Table 2. Unfortunately, this approach does not seem to work well for parsing. 
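For reference, a minimal sketch contrasting the full output layer with the factored variant above is given here (n_oe = 20 as in the text; all other values and names are illustrative).

```python
# Full output layer vs. the low-rank factorisation W K described above: the rule
# indicator f_o is scored either through a separate column of W per output feature
# or through shared 20-dimensional rule embeddings K.
import numpy as np

rng = np.random.default_rng(0)
n_h, n_o, n_oe = 200, 4000, 20

h = np.maximum(rng.normal(size=n_h), 0.0)   # hidden activations g(H v(f_w))
f_o = np.zeros(n_o); f_o[[17, 305]] = 1.0   # sparse rule indicator

W_full = rng.normal(0, 0.01, (n_h, n_o))
W_low  = rng.normal(0, 0.01, (n_h, n_oe))
K      = rng.normal(0, 0.01, (n_oe, n_o))   # learned rule embeddings

print(h @ W_full @ f_o)        # phi with a separate column of W per output feature
print(h @ (W_low @ K) @ f_o)   # phi with the low-rank factorisation W K
```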
Learning the output representation was empirically very unstable, and it also required careful initialization. We tried Gaussian initialization (as in the rest of our model) and initializing the model by clustering rules either randomly or according to their parent symbol. The latter is what is shown in the table, and gave substantially better performance. We hypothesize that blurring distinctions between output classes may harm the model’s ability to differentiate between closely-related symbols, which is required for good parsing performance. Using pretrained rule embeddings at this layer might also improve performance of this method. 7 Test Results We evaluate our system under two conditions: first, on the English Penn Treebank, and second, on the nine languages used in the SPMRL 2013 and 2014 shared tasks. 7.1 Penn Treebank Table 4 reports results on section 23 of the Penn Treebank (PTB). We focus our comparison on single parser systems as opposed to rerankers, ensembles, or self-trained methods (though these are also mentioned for context). First, we compare against 308 Arabic Basque French German Hebrew Hungarian Korean Polish Swedish Avg Dev, all lengths Hall et al. (2014) 78.89 83.74 79.40 83.28 88.06 87.44 81.85 91.10 75.95 83.30 This work* 80.68 84.37 80.65 85.25 89.37 89.46 82.35 92.10 77.93 84.68 Test, all lengths Berkeley 79.19 70.50 80.38 78.30 86.96 81.62 71.42 79.23 79.18 78.53 Berkeley-Tags 78.66 74.74 79.76 78.28 85.42 85.22 78.56 86.75 80.64 80.89 Crabb´e and Seddah (2014) 77.66 85.35 79.68 77.15 86.19 87.51 79.35 91.60 82.72 83.02 Hall et al. (2014) 78.75 83.39 79.70 78.43 87.18 88.25 80.18 90.66 82.00 83.17 This work* 80.24 85.41 81.25 80.95 88.61 90.66 82.23 92.97 83.45 85.08 Reranked ensemble 2014 Best 81.32 88.24 82.53 81.66 89.80 91.72 83.81 90.50 85.50 86.12 Table 3: Results for the nine treebanks in the SPMRL 2013/2014 Shared Tasks; all values are F-scores for sentences of all lengths using the version of evalb distributed with the shared task. Our parser substantially outperforms the strongest single parser results on this dataset (Hall et al., 2014; Crabb´e and Seddah, 2014). Berkeley-Tags is an improved version of the Berkeley parser designed for the shared task (Seddah et al., 2013). 2014 Best is a reranked ensemble of modified Berkeley parsers and constitutes the best published numbers on this dataset (Bj¨orkelund et al., 2013; Bj¨orkelund et al., 2014). F1 all Single model, PTB only Hall et al. (2014) 89.2 Berkeley 90.1 Carreras et al. (2008) 91.1 Shindo et al. (2012) single 91.1 Single model, PTB + vectors/clusters Zhu et al. (2013) 91.3 This work* 91.1 Extended conditions Charniak and Johnson (2005) 91.5 Socher et al. (2013) 90.4 Vinyals et al. (2014) single 90.5 Vinyals et al. (2014) ensemble 91.6 Shindo et al. (2012) ensemble 92.4 Table 4: Test results on section 23 of the Penn Treebank. We compare to several categories of parsers from the literatures. We outperform strong baselines such as the Berkeley Parser (Petrov and Klein, 2007) and the CVG Stanford parser (Socher et al., 2013) and we match the performance of sophisticated generative (Shindo et al., 2012) and discriminative (Carreras et al., 2008) parsers. four parsers trained only on the PTB with no auxiliary data: the CRF parser of Hall et al. (2014), the Berkeley parser (Petrov and Klein, 2007), the discriminative parser of Carreras et al. (2008), and the single TSG parser of Shindo et al. (2012). 
To our knowledge, the latter two systems are the highest performing in this PTB-only, single parser data condition; we match their performance at 91.1 F1, though we also use word vectors computed from unlabeled data. We further compare to the shiftreduce parser of Zhu et al. (2013), which uses unlabeled data in the form of Brown clusters. Our method achieves performance close to that of their parser. We also compare to the compositional vector grammar (CVG) parser of Socher et al. (2013) as well as the LSTM-based parser of Vinyals et al. (2014). The conditions these parsers are operating under are slightly different: the former is a reranker on top of the Stanford Parser (Klein and Manning, 2003) and the latter trains on much larger amounts of data parsed by a product of Berkeley parsers (Petrov, 2010). Regardless, we outperform the CVG parser as well as the single parser results from Vinyals et al. (2014). 7.2 SPMRL We also examine the performance of our parser on other languages, specifically the nine morphologically-rich languages used in the SPMRL 2013/2014 shared tasks (Seddah et al., 2013; Seddah et al., 2014). We train word vectors on the monolingual data distributed with the SPMRL 2014 shared task (typically 100M-200M tokens per language) using the skip-gram approach of word2vec with a window size of 1 309 (Mikolov et al., 2013).10 Here we use V = 1 in the backbone grammar, which we found to be beneficial overall. Table 3 shows that our system improves upon the performance of the parser from Hall et al. (2014) as well as the top single parser from the shared task (Crabb´e and Seddah, 2014), with robust improvements on all languages. 8 Conclusion In this work, we presented a CRF parser that scores anchored rule productions using dense input features computed from a feedforward neural net. Because the neural component is modularized, we can easily integrate it into a preexisting learning and inference framework based around dynamic programming of a discrete parse chart. Our combined neural and sparse model gives strong performance both on English and on other languages. Our system is publicly available at http://nlp.cs.berkeley.edu. Acknowledgments This work was partially supported by BBN under DARPA contract HR0011-12-C-0014, by a Facebook fellowship for the first author, and by a Google Faculty Research Award to the second author. Thanks to David Hall for assistance with the Epic parsing framework and for a preliminary implementation of the neural architecture, to Kush Rastogi for training word vectors on the SPMRL data, to Dan Jurafsky for helpful discussions, and to the anonymous reviewers for their insightful comments. References Jacob Andreas and Dan Klein. 2014. How much do word embeddings encode about syntax? In Proceedings of the Association for Computational Linguistics. Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring Continuous Word Representations for Dependency Parsing. In Proceedings of the Association for Computational Linguistics. Yonatan Belinkov, Tao Lei, Regina Barzilay, and Amir Globerson. 2014. Exploring Compositional Architectures and Word Vector Representations for Prepositional Phrase Attachment. Transactions of the Association for Computational Linguistics, 2:561–572. 10Training vectors with the SKIPDEP method of Bansal et al. (2014) did not substantially improve performance here. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A Neural Probabilistic Language Model. 
Journal of Machine Learning Research, 3:1137–1155, March. Anders Bj¨orkelund, Ozlem Cetinoglu, Rich´ard Farkas, Thomas Mueller, and Wolfgang Seeker. 2013. (Re)ranking Meets Morphosyntax: State-of-the-art Results from the SPMRL 2013 Shared Task. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages. Anders Bj¨orkelund, ¨Ozlem C¸ etino˘glu, Agnieszka Fale´nska, Rich´ard Farkas, Thomas Mueller, Wolfgang Seeker, and Zsolt Sz´ant´o. 2014. Introducing the IMS-Wrocław-Szeged-CIS entry at the SPMRL 2014 Shared Task: Reranking and Morpho-syntax meet Unlabeled Data. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages. Xavier Carreras, Michael Collins, and Terry Koo. 2008. TAG, Dynamic Programming, and the Perceptron for Efficient, Feature-rich Parsing. In Proceedings of the Conference on Computational Natural Language Learning. Eugene Charniak and Mark Johnson. 2005. Coarseto-Fine n-Best Parsing and MaxEnt Discriminative Reranking. In Proceedings of the Association for Computational Linguistics. Danqi Chen and Christopher D Manning. 2014. A Fast and Accurate Dependency Parser using Neural Networks. In Proceedings of Empirical Methods in Natural Language Processing. Wenliang Chen, Yue Zhang, and Min Zhang. 2014. Feature Embedding for Dependency Parsing. In Proceedings of the International Conference on Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural Language Processing (Almost) from Scratch. Journal of Machine Learning Research, 12:2493–2537. Benoit Crabb´e and Djam´e Seddah. 2014. Multilingual Discriminative Shift-Reduce Phrase Structure Parsing for the SPMRL 2014 Shared Task. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive Subgradient Methods for Online Learning and Stochastic Optimization. Journal of Machine Learning Research, 12:2121–2159, July. Jenny Rose Finkel, Alex Kleeman, and Christopher D. Manning. 2008. Efficient, Feature-based, Conditional Random Field Parsing. In Proceedings of the Association for Computational Linguistics. 310 David Hall, Greg Durrett, and Dan Klein. 2014. Less Grammar, More Features. In Proceedings of the Association for Computational Linguistics. James Henderson. 2003. Inducing History Representations for Broad Coverage Statistical Parsing. In Proceedings of the North American Chapter of the Association for Computational Linguistics. Sergey Ioffe and Christian Szegedy. 2015. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv preprint, arXiv:1502.03167. Ozan ˙Irsoy and Claire Cardie. 2014. Opinion Mining with Deep Recurrent Neural Networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A Convolutional Neural Network for Modelling Sentences. In Proceedings of the Association for Computational Linguistics. Yoon Kim. 2014. Convolutional Neural Networks for Sentence Classification. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. In Proceedings of the Association for Computational Linguistics. 
Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple Semi-supervised Dependency Parsing. In Proceedings of the Association for Computational Linguistics. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in Neural Information Processing Systems. Phong Le and Willem Zuidema. 2014. The insideoutside recursive neural network model for dependency parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-Rank Tensors for Scoring Dependency Structures. In Proceedings of the Association for Computational Linguistics. Omer Levy and Yoav Goldberg. 2014. DependencyBased Word Embeddings. In Proceedings of the Association for Computational Linguistics. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a Large Annotated Corpus of English: The Penn Treebank. Computational Linguistics, 19(2). Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient Estimation of Word Representations in Vector Space. In Proceedings of the International Conference on Learning Representations. Slav Petrov and Dan Klein. 2007. Improved Inference for Unlexicalized Parsing. In Proceedings of the North American Chapter of the Association for Computational Linguistics. Slav Petrov and Dan Klein. 2008. Sparse Multi-Scale Grammars for Discriminative Latent Variable Parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Slav Petrov. 2010. Products of Random Latent Variable Grammars. In Proceedings of the North American Chapter of the Association for Computational Linguistics. Djam´e Seddah, Reut Tsarfaty, Sandra K¨ubler, Marie Candito, Jinho D. Choi, Rich´ard Farkas, Jennifer Foster, Iakes Goenaga, Koldo Gojenola Galletebeitia, Yoav Goldberg, Spence Green, Nizar Habash, Marco Kuhlmann, Wolfgang Maier, Joakim Nivre, Adam Przepi´orkowski, Ryan Roth, Wolfgang Seeker, Yannick Versley, Veronika Vincze, Marcin Woli´nski, and Alina Wr´oblewska. 2013. Overview of the SPMRL 2013 Shared Task: A Cross-Framework Evaluation of Parsing Morphologically Rich Languages. In Proceedings of the Fourth Workshop on Statistical Parsing of Morphologically-Rich Languages. Djam´e Seddah, Sandra K¨ubler, and Reut Tsarfaty. 2014. Introducing the SPMRL 2014 Shared Task on Parsing Morphologically-rich Languages. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of Non-Canonical Languages. Hiroyuki Shindo, Yusuke Miyao, Akinori Fujino, and Masaaki Nagata. 2012. Bayesian Symbol-refined Tree Substitution Grammars for Syntactic Parsing. In Proceedings of the Association for Computational Linguistics. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing With Compositional Vector Grammars. In Proceedings of the Association for Computational Linguistics. Vivek Srikumar and Christopher D Manning. 2014. Learning Distributed Representations for Structured Output Prediction. In Advances in Neural Information Processing Systems. Yuta Tsuboi. 2014. Neural Networks Leverage Corpus-wide Information for Part-of-speech Tagging. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word Representations: A Simple and General Method for Semi-supervised Learning. 
In Proceedings of the Association for Computational Linguistics. Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2014. Grammar as a Foreign Language. CoRR, abs/1412.7449. 311 Mengqiu Wang and Christopher D. Manning. 2013. Effect of Non-linear Deep Architecture in Sequence Labeling. In Proceedings of the International Joint Conference on Natural Language Processing. Matthew D. Zeiler. 2012. ADADELTA: An Adaptive Learning Rate Method. CoRR, abs/1212.5701. Muhua Zhu, Yue Zhang, Wenliang Chen, Min Zhang, and Jingbo Zhu. 2013. Fast and Accurate ShiftReduce Constituent Parsing. In Proceedings of the Association for Computational Linguistics. 312
2015
30
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 313–322, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics An Effective Neural Network Model for Graph-based Dependency Parsing Wenzhe Pei Tao Ge Baobao Chang∗ Key Laboratory of Computational Linguistics, Ministry of Education, School of Electronics Engineering and Computer Science, Peking University, No.5 Yiheyuan Road, Haidian District, Beijing, 100871, China Collaborative Innovation Center for Language Ability, Xuzhou, 221009, China. {peiwenzhe,getao,chbb}@pku.edu.cn Abstract Most existing graph-based parsing models rely on millions of hand-crafted features, which limits their generalization ability and slows down the parsing speed. In this paper, we propose a general and effective Neural Network model for graph-based dependency parsing. Our model can automatically learn high-order feature combinations using only atomic features by exploiting a novel activation function tanhcube. Moreover, we propose a simple yet effective way to utilize phrase-level information that is expensive to use in conventional graph-based parsers. Experiments on the English Penn Treebank show that parsers based on our model perform better than conventional graph-based parsers. 1 Introduction Dependency parsing is essential for computers to understand natural languages, whose performance may have a direct effect on many NLP application. Due to its importance, dependency parsing, has been studied for tens of years. Among a variety of dependency parsing approaches (McDonald et al., 2005; McDonald and Pereira, 2006; Carreras, 2007; Koo and Collins, 2010; Zhang and Nivre, 2011), graph-based models seem to be one of the most successful solutions to the challenge due to its ability of scoring the parsing decisions on whole-tree basis. Typical graph-based models factor the dependency tree into subgraphs, ranging from the smallest edge (first-order) to a controllable bigger subgraph consisting of more than one single edge (second-order and third order), and score the whole tree by summing scores of the subgraphs. In these models, subgraphs are usually represented as a high-dimensional feature vectors ∗Corresponding author which are fed into a linear model to learn the feature weight for scoring the subgraphs. In spite of their advantages, conventional graphbased models rely heavily on an enormous number of hand-crafted features, which brings about serious problems. First, a mass of features could put the models in the risk of overfitting and slow down the parsing speed, especially in the highorder models where combinational features capturing interactions between head, modifier, siblings and (or) grandparent could easily explode the feature space. In addition, feature design requires domain expertise, which means useful features are likely to be neglected due to a lack of domain knowledge. As a matter of fact, these two problems exist in most graph-based models, which have stuck the development of dependency parsing for a few years. To ease the problem of feature engineering, we propose a general and effective Neural Network model for graph-based dependency parsing in this paper. The main advantages of our model are as follows: • Instead of using large number of hand-crafted features, our model only uses atomic features (Chen et al., 2014) such as word unigrams and POS-tag unigrams. 
Feature combinations and high-order features are automatically learned with our novel activation function tanh-cube, thus alleviating the heavy burden of feature engineering in conventional graph-based models (McDonald et al., 2005; McDonald and Pereira, 2006; Koo and Collins, 2010). Not only does it avoid the risk of overfitting but also it discovers useful new features that have never been used in conventional parsers. • We propose to exploit phrase-level information through distributed representation for phrases (phrase embeddings). It not only en313 Figure 1: First-order and Second-order factorization strategy. Here h stands for head word, m stands for modifier word and s stands for the sibling of m. ables our model to exploit richer context information that previous work did not consider due to the curse of dimension but also captures inherent correlations between phrases. • Unlike other neural network based models (Chen et al., 2014; Le and Zuidema, 2014) where an additional parser is needed for either extracting features (Chen et al., 2014) or generating k-best list for reranking (Le and Zuidema, 2014), both training and decoding in our model are performed based on our neural network architecture in an effective way. • Our model does not impose any change to the decoding process of conventional graphbased parsing model. First-order, secondorder and higher order models can be easily implemented using our model. We implement three effective models with increasing expressive capabilities. The first model is a simple first-order model that uses only atomic features and does not use any combinational features. Despite its simpleness, it outperforms conventional first-order model (McDonald et al., 2005) and has a faster parsing speed. To further strengthen our parsing model, we incorporate phrase embeddings into the model, which significantly improves the parsing accuracy. Finally, we extend our first-order model to a secondorder model that exploits interactions between two adjacent dependency edges as in McDonald and Pereira (2006) thus further improves the model performance. We evaluate our models on the English Penn Treebank. Experiment results show that both our first-order and second-order models outperform the corresponding conventional models. 2 Neural Network Model A dependency tree is a rooted, directed tree spanning the whole sentence. Given a sentence x, graph-based models formulates the parsing process as a searching problem: y∗(x) = arg max ˆy∈Y (x) Score(x, ˆy(x); θ) (1) where y∗(x) is tree with highest score, Y (x) is the set of all trees compatible with x, θ are model parameters and Score(x, ˆy(x); θ) represents how likely that a particular tree ˆy(x) is the correct analysis for x. However, the size of Y (x) is exponential large, which makes it impractical to solve equation (1) directly. Previous work (McDonald et al., 2005; McDonald and Pereira, 2006; Koo and Collins, 2010) assumes that the score of ˆy(x) factors through the scores of subgraphs c of ˆy(x) so that efficient algorithms can be designed for decoding: Score(x, ˆy(x); θ) = X c∈ˆy(x) ScoreF(x, c; θ) (2) Figure 1 gives two examples of commonly used factorization strategy proposed by Mcdonald et.al (2005) and Mcdonald and Pereira (2006). The simplest subgraph uses a first-order factorization (McDonald et al., 2005) which decomposes a dependency tree into single dependency arcs (Figure 1(a)). 
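As a rough illustration of the factored scoring in Eqs. (1)-(2), the sketch below scores a candidate tree as the sum of its per-arc scores in the first-order case. The (head, modifier) tree encoding, the score_arc callback, and the explicit enumeration of candidate trees are assumptions made only for illustration; the actual decoder uses Eisner's dynamic program rather than enumerating the exponentially large set Y(x).

```python
def score_tree(sentence, arcs, score_arc):
    """Score(x, y) = sum of ScoreF(x, c) over the subgraphs c of y
    (single dependency arcs in the first-order factorization)."""
    return sum(score_arc(sentence, head, mod) for head, mod in arcs)

def best_tree(sentence, candidate_trees, score_arc):
    """y*(x) = argmax over Y(x); enumeration is shown only to make the
    objective concrete, not as a practical decoding strategy."""
    return max(candidate_trees, key=lambda arcs: score_tree(sentence, arcs, score_arc))
```

Higher-order models change only what counts as a subgraph c (e.g., sibling triples), not the additive form of the score.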
Based on the first-order model, secondorder factorization (McDonald and Pereira, 2006) (Figure 1(b)) brings sibling information into decoding. Specifically, a sibling part consists of a triple of indices (h, m, s) where (h, m) and (h, s) are dependencies and s and m are successive modifiers to the same side of h. The most common choice for ScoreF(x, c; θ), which is the score function for subgraph c in the tree, is a simple linear function: ScoreF(x, c; θ) = w · f(x, c) (3) where f(x, c) is the feature representation of subgraph c and w is the corresponding weight vector. However, the effectiveness of this function relies heavily on the design of feature vector f(x, c). In previous work (McDonald et al., 2005; McDonald and Pereira, 2006), millions of hand-crafted features were used to capture context and structure information in the subgraph which not only limits the model’s ability to generalize well but only slows down the parsing speed. 314 Figure 2: Architecture of the Neural Network In our work, we propose a neural network model for scoring subgraph c in the tree: ScoreF(x, c; θ) = NN(x, c) (4) where NN is our scoring function based on neural network (Figure 2). As we will show in the following sections, it alleviates the heavy burden of feature engineering in conventional graph-based models and achieves better performance by automatically learning useful information in the data. The effectiveness of our neural network depends on five key components: Feature Embeddings, Phrase Embeddings, Direction-specific transformation, Learning Feature Combinations and Max-Margin Training. 2.1 Feature Embeddings As shown in Figure 2, part of the input to the neural network is feature representation of the subgraph. Instead of using millions of features as in conventional models, we only use use atomic features (Chen et al., 2014) such as word unigrams and POS-tag unigrams, which are less likely to be sparse. The detailed atomic features we use will be described in Section 3. Unlike conventional models, the atomic features in our model are transformed into their corresponding distributed representations (feature embeddings). The idea of distributed representation for symbolic data is one of the most important reasons why neural network works in NLP tasks. It is shown that similar features will have similar embeddings which capture the syntactic and semantic information behind features (Bengio et al., Figure 3: Illustration for phrase embeddings. h, m and x0 to x6 are words in the sentence. 2003; Collobert et al., 2011; Schwenk et al., 2012; Mikolov et al., 2013; Socher et al., 2013; Pei et al., 2014). Formally, we have a feature dictionary D of size |D|. Each feature f ∈D is represented as a realvalued vector (feature embedding) Embed(f) ∈ Rd where d is the dimensionality of the vector space. All feature embeddings stacking together forms the embedding matrix M ∈Rd×|D|. The embedding matrix M is initialized randomly and trained by our model (Section 2.6). 2.2 Phrase Embeddings Context information of word pairs1 such as the dependency pair (h, m) has been widely believed to be useful in graph-based models (McDonald et al., 2005; McDonald and Pereira, 2006). Given a sentence x, the context for h and m includes three context parts: prefix, infix and suffix, as illustrated in Figure 3. We call these parts phrases in our work. Context representation in conventional models are limited: First, phrases cannot be used as features directly because of the data sparseness problem. 
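Before turning to how phrases are handled, the following minimal sketch illustrates the feature-embedding lookup of Section 2.1. The dimensions and the id-based interface are illustrative assumptions; in the model, the embedding matrix M is initialized randomly and then trained jointly with the other parameters.

```python
import numpy as np

d, vocab_size = 50, 100_000                      # assumed sizes for illustration
M = np.random.uniform(-0.01, 0.01, size=(d, vocab_size))   # embedding matrix M in R^{d x |D|}

def embed_atomic_features(feature_ids):
    """Look up the d-dimensional embedding Embed(f) of each atomic feature
    and concatenate the results into part of the network input."""
    return np.concatenate([M[:, f] for f in feature_ids])

# e.g. embed_atomic_features([12, 407, 3021]) returns a vector of length 3 * d
```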
Therefore, phrases are backed off to low-order representation such as bigrams and trigrams. For example, Mcdonald et.al (2005) used tri-gram features of infix between head-modifier pair (h, m). Sometimes even tri-grams are expensive to use, which is the reason why Mcdonald and Pereira (2006) chose to ignore features over triples of words in their second-order model to prevent from exploding the size of the feature space. Sec1A word pair is not limited to the dependency pair (h, m). It could be any pair with particular relation (e.g., sibling pair (s, m) in Figure 1). Figure 3 only uses (h, m) as an example. 315 ond, bigrams or tri-grams are lexical features thus cannot capture syntactic and semantic information behind phrases. For instance, “hit the ball” and “kick the football” should have similar representations because they share similar syntactic structures, but lexical tri-grams will fail to capture their similarity. Unlike previous work, we propose to use distributed representation (phrase embedding) of phrases to capture phrase-level information. We use a simple yet effective way to calculate phrase embeddings from word (POS-tag) embeddings. As shown in Figure 3, we average the word embeddings in prefix, infix and suffix respectively and get three global word-phrase embeddings. For pairs where no prefix or suffix exists, the corresponding embedding is set to zero. We also get three global POS-phrase embeddings which are calculated in the same way as words. These embeddings are then concatenated with feature embeddings and fed to the following hidden layer. Phrase embeddings provide panorama representation of the context, allowing our model to capture richer context information compared with the back-off tri-gram representation. Moreover, as a distributed representation, phrase embeddings perform generalization over specific phrases, thus better capture the syntactic and semantic information than back-off tri-grams. 2.3 Direction-specific Transformation In dependency representation of sentence, the edge direction indicates which one of the words is the head h and which one is the modifier m. Unlike previous work (McDonald et al., 2005; McDonald and Pereira, 2006) that models the edge direction as feature to be conjoined with other features, we model the edge direction with directionspecific transformation. As shown in Figure 2, the parameters in hidden layer (W d h, bd h) and the output layer (W d o , bd o) are bound with index d ∈{0, 1} which indicates the direction between head and modifier (0 for left arc and 1 for right arc). In this way, the model can learn direction-specific parameters and automatically capture the interactions between edge direction and other features. 2.4 Learning Feature Combination The key to the success of graph-based dependency parsing is the design of features, especially combinational features. Effective as these features are, as we have said in Section 1, they are prone to overfitting and hard to design. In our work, we introduce a new activation function that can automatically learn these feature combinations. As shown in Figure 2, we first concatenate the embeddings into a single vector a. Then a is fed into the next layer which performs linear transformation followed by an element-wise activation function g: h = g(W d ha + bd h) (5) Our new activation function g is defined as follows: g(l) = tanh(l3 + l) (6) where l is the result of linear transformation and tanh is the hyperbolic tangent activation function widely used in neural networks. 
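The following sketch puts together the pieces just described: span-averaged phrase embeddings (Section 2.2), the activation of Eq. (6), and a hidden layer whose parameters would be direction-specific as in Section 2.3. Function names and the NumPy-based interface are assumptions made for illustration.

```python
import numpy as np

def phrase_embedding(word_vectors, dim):
    """Average the embeddings of the words (or POS tags) in a prefix, infix,
    or suffix span; an all-zero vector is used when the span is empty,
    as for pairs with no prefix or suffix (Section 2.2)."""
    if len(word_vectors) == 0:
        return np.zeros(dim)
    return np.mean(word_vectors, axis=0)

def tanh_cube(l):
    """The activation of Eq. (6): g(l) = tanh(l^3 + l)."""
    return np.tanh(l ** 3 + l)

def hidden_layer(a, W_h, b_h):
    """h = g(W_h a + b_h) as in Eq. (5); in the full model W_h and b_h are
    direction-specific, one set per arc direction (Section 2.3)."""
    return tanh_cube(W_h @ a + b_h)
```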
We call this new activation function tanh-cube. As we can see, without the cube term, tanh-cube would be just the same as the conventional nonlinear transformation in most neural networks. The cube extension is added to enhance the ability to capture complex interactions between input features. Intuitively, the cube term in each hidden unit directly models feature combinations in a multiplicative way: (w1a1 + w2a2 + ... + wnan + b)3 = X i,j,k (wiwjwk)aiajak + X i,j b(wiwj)aiaj... These feature combinations are hand-designed in conventional graph-based models but our model learns these combinations automatically and encodes them in the model parameters. Similar ideas were also proposed in previous works (Socher et al., 2013; Pei et al., 2014; Chen and Manning, 2014). Socher et.al (2013) and Pei et.al (2014) used a tensor-based activation function to learn feature combinations. However, tensor-based transformation is quite slow even with tensor factorization (Pei et al., 2014). Chen and Manning (2014) proposed to use cube function g(l) = l3 which inspires our tanh-cube function. Compared with cube function, tanh-cube has three advantages: • The cube function is unbounded, making the activation output either too small or too big if the norm of input l is not properly controlled, especially in deep neural network. On the 316 contrary, tanh-cube is bounded by the tanh function thus safer to use in deep neural network. • Intuitively, the behavior of cube function resembles the “polynomial kernel” in SVM. In fact, SVM can be seen as a special onehidden-layer neural network where the kernel function that performs non-linear transformation is seen as a hidden layer and support vectors as hidden units. Compared with cube function, tanh-cube combines the power of “kernel function” with the tanh non-linear transformation in neural network. • Last but not least, as we will show in Section 4, tanh-cube converges faster than the cube function although the rigorous proof is still open to investigate. 2.5 Model Output After the non-linear transformation of hidden layer, the score of the subgraph c is calculated in the output layer using a simple linear function: ScoreF(x, c) = W d o h + bd o (7) The output score ScoreF(x, c) ∈R|L| is a score vector where |L| is the number of dependency types and each dimension of ScoreF(x, c) is the score for each kind of dependency type of headmodifier pair (i.e. (h, m) in Figure 1). 2.6 Max-Margin Training The parameters of our model are θ = {W d h, bd h, W d o , bd o, M}. All parameters are initialized with uniform distribution within (-0.01, 0.01). For model training, we use the Max-Margin criterion. Given a training instance (x, y), we search for the dependency tree with the highest score computed as equation (1) in Section 2. The object of Max-Margin training is that the highest scoring tree is the correct one: y∗= y and its score will be larger up to a margin to other possible tree ˆy ∈Y (x): Score(x, y; θ) ≥Score(x, ˆy; θ) + △(y, ˆy) The structured margin loss △(y, ˆy) is defined as: △(y, ˆy) = n X j κ1{h(y, xj) ̸= h(ˆy, xj)} 1-order-atomic h−2.w, h−1.w, h.w, h1.w, h2.w h−2.p, h−1.p, h.p, h1.p, h2.p m−2.w, m−1.w, m.w, m1.w, m2.w m−2.p, m−1.p, m.p, m1.p, m2.p dis(h, m) 1-order-phrase + hm prefix.w, hm infix.w, hm suffix.w + hm prefix.p, hm infix.p, hm suffix.p 2-order-phrase + s−2.w, s−1.w, s.w, s1.w, s2.w + s−2.p, s−1.p, s.p, s1.p, s2.p + sm infix.w, sm infix.p Table 1: Features in our three models. w is short for word and p for POS-tag. 
h indicates head and m indicates modifier. The subscript represents the relative position to the center word. dis(h, m) is the distance between head and modifier. hm prefix, hm infix and hm suffix are phrases for head-modifier pair (h, m). s indicates the sibling in second-order model. sm infix is the infix phrase between sibling pair (s, m) where n is the length of sentence x, h(y, xj) is the head (with type) for the j-th word of x in tree y and κ is a discount parameter. The loss is proportional to the number of word with an incorrect head and edge type in the proposed tree. This leads to the regularized objective function for m training examples: J(θ) = 1 m m X i=1 li(θ) + λ 2 ||θ||2 li(θ) = max ˆy∈Y (xi)(Score(xi, ˆy; θ) + △(yi, ˆy)) −Score(xi, yi; θ)) (8) We use the diagonal variant of AdaGrad (Duchi et al., 2011) with minibatchs (batch size = 20) to minimize the object function. We also apply dropout (Hinton et al., 2012) with 0.5 rate to the hidden layer. 3 Model Implementation Base on our Neural Network model, we present three model implementations with increasing expressive capabilities in this section. 3.1 First-order models We first implement two first-order models: 1order-atomic and 1-order-phrase. We use the Eisner (2000) algorithm for decoding. The first two rows of Table 1 list the features we use in these two models. 1-order-atomic only uses atomic features as shown in the first row of Table 1. Specifically, the 317 Models Dev Test Speed (sent/s) UAS LAS UAS LAS First-order MSTParser-1-order 92.01 90.77 91.60 90.39 20 1-order-atomic-rand 92.00 90.71 91.62 90.41 55 1-order-atomic 92.19 90.94 92.14 90.92 55 1-order-phrase-rand 92.47 91.19 92.25 91.05 26 1-order-phrase 92.82 91.48 92.59 91.37 26 Second-order MSTParser-2-order 92.70 91.48 92.30 91.06 14 2-order-phrase-rand 93.39 92.10 92.99 91.79 10 2-order-phrase 93.57 92.29 93.29 92.13 10 Third-order (Koo and Collins, 2010) 93.49 N/A 93.04 N/A N/A Table 2: Comparison with conventional graph-based models. head word and its local neighbor words that are within the distance of 2 are selected as the head’s word unigram features. The modifier’s word unigram features is extracted in the same way. We also use the POS-tags of the corresponding word features and the distance between head and modifier as additional atomic features. We then improved 1-order-atomic to 1-orderphrase by incorporating additional phrase embeddings. The three phrase embeddings of headmodifier pair (h, m): hm prefix, hm infix and hm suffix are calculated as in Section 2.2. 3.2 Second-order model Our model can be easily extended to a secondorder model using the second-order decoding algorithm (Eisner, 1996; McDonald and Pereira, 2006). The third row of Table 1 shows the additional features we use in our second-order model. Sibling node and its local context are used as additional atomic features. We also used the infix embedding for the infix between sibling pair (s, m), which we call sm infix. It is calculated in the same way as infix between head-modifier pair (h, m) (i.e., hm infix) in Section 2.2 except that the word pair is now s and m. For cases where no sibling information is available, the corresponding sibling-related embeddings are set to zero vector. 4 Experiments 4.1 Experiment Setup We use the English Penn Treebank (PTB) to evaluate our model implementations and Yamada and Matsumoto (2003) head rules are used to extract dependency trees. 
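As a concrete reference for the max-margin objective of Section 2.6 above, the sketch below computes the structured margin and the per-instance loss l_i(θ). The per-token (head, label) encoding of trees, the score callback, and the explicit candidate set are assumptions standing in for the loss-augmented decoder used in practice.

```python
def margin_loss(gold_tree, pred_tree, kappa=0.3):
    """Structured margin of Section 2.6: kappa times the number of tokens
    whose (head, label) pair in the proposed tree differs from the gold tree."""
    return kappa * sum(g != p for g, p in zip(gold_tree, pred_tree))

def instance_loss(score, x, gold_tree, candidate_trees, kappa=0.3):
    """l_i(theta) = max_y [Score(x_i, y) + margin(y_i, y)] - Score(x_i, y_i)."""
    augmented = max(score(x, y) + margin_loss(gold_tree, y, kappa)
                    for y in candidate_trees)
    return augmented - score(x, gold_tree)
```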
We follow the standard splits of PTB3, using section 2-21 for training, section 22 as development set and 23 as test set. The Stanford POS Tagger (Toutanova et al., 2003) with ten-way jackknifing of the training data is used for assigning POS tags (accuracy ≈97.2%). Hyper-parameters of our models are tuned on the development set and their final settings are as follows: embedding size d = 50, hidden layer (Layer 2) size = 200, regularization parameter λ = 10−4, discount parameter for margin loss κ = 0.3, initial learning rate of AdaGrad alpha = 0.1. 4.2 Experiment Results Table 2 compares our models with several conventional graph-based parsers. We use MSTParser2 for conventional first-order model (McDonald et al., 2005) and second-order model (McDonald and Pereira, 2006). We also include the result of a third-order model of Koo and Collins (2010) for comparison3. For our models, we report the results with and without unsupervised pre-training. Pretraining only trains the word-based feature embeddings on Gigaword corpus (Graff et al., 2003) using word2vec4 and all other parameters are still initialized randomly. In all experiments, we report unlabeled attachment scores (UAS) and labeled attachment scores (LAS) and punctuation5 is excluded in all evaluation metrics. The parsing speeds are measured on a workstation with Intel Xeon 3.4GHz CPU and 32GB RAM. As we can see, even with random initialization, 1-order-atomic-rand performs as well as conventional first-order model and both 1-order-phrase2http://sourceforge.net/projects/ mstparser 3Note that Koo and Collins (2010)’s third-order model and our models are not strict comparable since their model is an unlabeled model. 4https://code.google.com/p/word2vec/ 5Following previous work, a token is a punctuation if its POS tag is {“ ” : , .} 318 Figure 4: Convergence curve for tanh-cube and cube activation function. rand and 2-order-phrase-rand perform better than conventional models in MSTParser. Pretraining further improves the performance of all three models, which is consistent with the conclusion of previous work (Pei et al., 2014; Chen and Manning, 2014). Moreover, 1-order-phrase performs better than 1-order-atomic, which shows that phrase embeddings do improve the model. 2order-phrase further improves the performance because of the more expressive second-order factorization. All three models perform significantly better than their counterparts in MSTParser where millions of features are used and 1-order-phrase works surprisingly well that it even beats the conventional second-order model. With regard to parsing speed, 1-order-atomic is the fastest while other two models have similar speeds as MSTParser. Further speed up could be achieved by using pre-computing strategy as mentioned in Chen and Manning (2014). We did not try this strategy since parsing speed is not the main focus of this paper. Model tanh-cube cube tanh 1-order-atomic 92.19 91.97 91.73 1-order-phrase 92.82 92.25 92.13 2-order-phrase 93.57 92.95 92.91 Table 3: Model Performance of different activation functions. We also investigated the effect of different activation functions. We trained our models with the same configuration except for the activation function. Table 3 lists the UAS of three models on development set. 
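For reference, here is a small sketch of the evaluation metric used throughout: attachment scores with punctuation excluded, following footnote 5. The per-token triple format and the spelled-out PTB quote tags are assumptions made for illustration.

```python
PUNCT_TAGS = {"``", "''", ":", ",", "."}   # punctuation POS tags per footnote 5 (assumed spelling)

def attachment_scores(gold, pred):
    """UAS/LAS over non-punctuation tokens.  gold/pred are assumed to be
    lists of (head, label, pos_tag) triples, one per token."""
    total = uas = las = 0
    for (gh, gl, tag), (ph, pl, _) in zip(gold, pred):
        if tag in PUNCT_TAGS:
            continue
        total += 1
        uas += int(gh == ph)
        las += int(gh == ph and gl == pl)
    return 100.0 * uas / total, 100.0 * las / total
```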
Feature Type Instance Neighboors Words (word2vec) in the, of, and, for, from his himself, her, he, him, father which its, essentially, similar, that, also Words (Our model) in on, at, behind, among, during his her, my, their, its, he which where, who, whom, whose, though POS-tags NN NNPS, NNS, EX, NNP, POS JJ JJR, JJS, PDT, RBR, RBS Table 4: Examples of similar words and POS-tags according to feature embeddings. As we can see, tanh-cube function outperforms cube function because of advantages we mentioned in Section 2.4. Moreover, both tanh-cube function and cube function performs better than tanh function. The reason is that the cube term can capture more interactions between input features. We also plot the UAS of 2-order-phrase during each iteration of training. As shown in Figure 4, tanh-cube function converges faster than cube function. 4.3 Qualitative Analysis In order to see why our models work, we made qualitative analysis on different aspects of our model. Ability of Feature Abstraction Feature embeddings give our model the ability of feature abstraction. They capture the inherent correlations between features so that syntactic similar features will have similar representations, which makes our model generalizes well on unseen data. Table 4 shows the effect of different feature embeddings which are obtained from 2-orderphrase after training. For each kind of feature type, we list several features as well as top 5 features that are nearest (measured by Euclidean distance) to the corresponding feature according to their embeddings. We first analysis the effect of word embeddings after training. For comparison, we also list the initial word embeddings in word2vec. As we can see, in word2vec word embeddings, words that are similar to in and which tends to be those 319 Phrase Neighboor On a Saturday morning On Monday night football On Sunday On Saturday On Tuesday afternoon On recent Saturday morning most of it of it of it all some of it also most of these are only some of big investment bank great investment bank bank investment entire equity investment another cash equity investor real estate lending division Table 5: Examples of similar phrases according to phrase embeddings. co-occuring with them and for word his, similar words are morphologies of he. On the contrary, similar words measured by our embeddings have similar syntactic functions. This is helpful for dependency parsing since parsing models care more about the syntactic functions of words rather than their collocations or morphologies. POS-tag embeddings also show similar behavior with word embeddings. As shown in Table 4, our model captures similarities between POS-tags even though their embeddings are initialized randomly. We also investigated the effect of phrase embeddings in the same way as feature embeddings. Table 5 lists the examples of similar phrases. Our phrase embeddings work pretty well given that only a simple averaging strategy is used. Phrases that are close to each other tend to share similar syntactic and semantic information. By using phrase embeddings, our model sees panorama of the context rather than limited word tri-grams and thus captures richer context information, which is the reason why phrase embeddings significantly improve the performance. Ability of Feature Learning Finally, we try to unveil the mysterious hidden layer and investigate what features it learns. 
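A brief sketch of the nearest-neighbour lookup behind Tables 4 and 5: features are ranked by the Euclidean distance between their trained embeddings. The dict-based embeddings interface is an assumption made for illustration.

```python
import numpy as np

def nearest_features(query, embeddings, k=5):
    """Return the k features whose trained embeddings are closest to the
    query feature's embedding in Euclidean distance.  'embeddings' is
    assumed to map each feature (word, POS tag, or phrase) to its vector."""
    q = embeddings[query]
    dists = {f: np.linalg.norm(v - q) for f, v in embeddings.items() if f != query}
    return sorted(dists, key=dists.get)[:k]
```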
For each hidden unit of 2-order-phrase, we get its connections with embeddings (i.e., W d h in Figure 2) and pick the connections whose weights have absolute value > 0.1. We sampled several hidden units and invenstigated which features their highly weighted connections belong to: • Hidden 1: h.w, m.w, h−1.w, m1.w • Hidden 2: h.p, m.p, s.p • Hidden 3: hm infix.p, hm infix.w, hm prefix.w • Hidden 4: hm infix.w, hm prefix.w, sm infix.w • Hidden 5: hm infix.p, hm infix.w, hm suffix.w The samples above give qualitative results of what features the hidden layer learns: • Hidden unit 1 and 2 show that atomic features of head, modifier, sibling and their local context words are useful in our model, which is consistent with our expectations since these features are also very important features in conventional graph-based models (McDonald and Pereira, 2006). • Features in the same hidden unit will “combine” with each other through our tanh-cube activation function. As we can see, feature combination in hidden unit 2 were also used in Mcdonald and Pereira (2006). However, these feature combinations are automatically captured by our model without the laborintensive feature engineering. • Hidden unit 3 to 5 show that phrase-level information like hm prefix, hm suffix and sm infix are effective in our model. These features are not used in conventional secondorder model (McDonald and Pereira, 2006) because they could explode the feature space. Through our tanh-cube activation function, our model further captures the interactions between phrases and other features without the concern of overfitting. 5 Related Work Models for dependency parsing have been studied with considerable effort in the NLP community. Among them, we only focus on the graphbased models here. Most previous systems address this task by using linear statistical models with carefully designed context and structure features. The types of features available rely on tree factorization and decoding algorithm. Mcdonald et.al (2005) proposed the first-order model which is also know as arc-factored model. Efficient decoding can be performed with Eisner (2000) algorithm in O(n3) time and O(n2) space. Mcdonald and Pereira (2006) further extend the first-order model to second-order model where sibling information is available during decoding. Eisner (2000) 320 algorithm can be modified trivially for secondorder decoding. Carreras (2007) proposed a more powerful second-order model that can score both sibling and grandchild parts with the cost of O(n4) time and O(n3) space. To exploit more structure information, Koo and Collins (2010) proposed three third-order models with computational requirements of O(n4) time and O(n3) space. Recently, neural network models have been increasingly focused on for their ability to minimize the effort in feature engineering. Chen et.al (2014) proposed an approach to automatically learning feature embeddings for graph-based dependency parsing. The learned feature embeddings are used as additional features in conventional graph-based model. Le and Zuidema (2014) proprosed an infinite-order model based on recursive neural network. However, their model can only be used as an reranking model since decoding is intractable. Compared with these work, our model is a general and standalone neural network model. Both training and decoding in our model are performed based on our neural network architecture in an effective way. 
Although only first-order and second-order models are implemented in our work, higher-order graph-based models can be easily implemented using our model. 6 Conclusion In this paper, we propose a general and effective neural network model that can automatically learn feature combinations with our novel activation function. Moreover, we introduce a simple yet effect way to utilize phrase-level information, which greatly improves the model performance. Experiments on the benchmark dataset show that our model achieves better results than conventional models. Acknowledgments This work is supported by National Natural Science Foundation of China under Grant No. 61273318 and National Key Basic Research Program of China 2014CB340504. We want to thank Miaohong Chen and Pingping Huang for their valuable comments on the initial idea and helping pre-process the data. References Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. Xavier Carreras. 2007. Experiments with a higherorder projective dependency parser. In EMNLPCoNLL, pages 957–961. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740–750, Doha, Qatar, October. Association for Computational Linguistics. Wenliang Chen, Yue Zhang, and Min Zhang. 2014. Feature embedding for dependency parsing. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 816–826, Dublin, Ireland, August. Dublin City University and Association for Computational Linguistics. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 999999:2121–2159. Jason M Eisner. 1996. Three new probabilistic models for dependency parsing: An exploration. In Proceedings of the 16th conference on Computational linguistics-Volume 1, pages 340–345. Association for Computational Linguistics. Jason Eisner. 2000. Bilexical grammars and their cubic-time parsing algorithms. In Advances in probabilistic and other parsing technologies, pages 29– 61. Springer. David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia. Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing coadaptation of feature detectors. arXiv preprint arXiv:1207.0580. Terry Koo and Michael Collins. 2010. Efficient thirdorder dependency parsers. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1–11. Association for Computational Linguistics. 321 Phong Le and Willem Zuidema. 2014. The insideoutside recursive neural network model for dependency parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 729–739, Doha, Qatar, October. Association for Computational Linguistics. Ryan T McDonald and Fernando CN Pereira. 2006. Online learning of approximate dependency parsing algorithms. In EACL. Citeseer. 
Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 91–98. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Maxmargin tensor neural network for chinese word segmentation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 293–303, Baltimore, Maryland, June. Association for Computational Linguistics. Holger Schwenk, Anthony Rousseau, and Mohammed Attik. 2012. Large, pruned or continuous space language models on a gpu for statistical machine translation. In Proceedings of the NAACL-HLT 2012 Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Language Modeling for HLT, pages 11–19. Association for Computational Linguistics. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1631–1642, Seattle, Washington, USA, October. Association for Computational Linguistics. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language TechnologyVolume 1, pages 173–180. Association for Computational Linguistics. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proceedings of IWPT, volume 3, pages 195–206. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 188–193, Portland, Oregon, USA, June. Association for Computational Linguistics. 322
2015
31
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 323–333, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Structured Training for Neural Network Transition-Based Parsing David Weiss Chris Alberti Michael Collins Slav Petrov Google Inc New York, NY {djweiss,chrisalberti,mjcollins,slav}@google.com Abstract We present structured perceptron training for neural network transition-based dependency parsing. We learn the neural network representation using a gold corpus augmented by a large number of automatically parsed sentences. Given this fixed network representation, we learn a final layer using the structured perceptron with beam-search decoding. On the Penn Treebank, our parser reaches 94.26% unlabeled and 92.41% labeled attachment accuracy, which to our knowledge is the best accuracy on Stanford Dependencies to date. We also provide indepth ablative analysis to determine which aspects of our model provide the largest gains in accuracy. 1 Introduction Syntactic analysis is a central problem in language understanding that has received a tremendous amount of attention. Lately, dependency parsing has emerged as a popular approach to this problem due to the availability of dependency treebanks in many languages (Buchholz and Marsi, 2006; Nivre et al., 2007; McDonald et al., 2013) and the efficiency of dependency parsers. Transition-based parsers (Nivre, 2008) have been shown to provide a good balance between efficiency and accuracy. In transition-based parsing, sentences are processed in a linear left to right pass; at each position, the parser needs to choose from a set of possible actions defined by the transition strategy. In greedy models, a classifier is used to independently decide which transition to take based on local features of the current parse configuration. This classifier typically uses hand-engineered features and is trained on individual transitions extracted from the gold transition sequence. While extremely fast, these greedy models typically suffer from search errors due to the inability to recover from incorrect decisions. Zhang and Clark (2008) showed that a beamsearch decoding algorithm utilizing the structured perceptron training algorithm can greatly improve accuracy. Nonetheless, significant manual feature engineering was required before transitionbased systems provided competitive accuracy with graph-based parsers (Zhang and Nivre, 2011), and only by incorporating graph-based scoring functions were Bohnet and Kuhn (2012) able to exceed the accuracy of graph-based approaches. In contrast to these carefully hand-tuned approaches, Chen and Manning (2014) recently presented a neural network version of a greedy transition-based parser. In their model, a feedforward neural network with a hidden layer is used to make the transition decisions. The hidden layer has the power to learn arbitrary combinations of the atomic inputs, thereby eliminating the need for hand-engineered features. Furthermore, because the neural network uses a distributed representation, it is able to model lexical, part-of-speech (POS) tag, and arc label similarities in a continuous space. However, although their model outperforms its greedy hand-engineered counterparts, it is not competitive with state-of-the-art dependency parsers that are trained for structured search. 
In this work, we combine the representational power of neural networks with the superior search enabled by structured training and inference, making our parser one of the most accurate dependency parsers to date. Training and testing on the Penn Treebank (Marcus et al., 1993), our transition-based parser achieves 93.99% unlabeled (UAS) / 92.05% labeled (LAS) attachment accuracy, outperforming the 93.22% UAS / 91.02% LAS of Zhang and McDonald (2014) and 93.27 UAS / 91.19 LAS of Bohnet and Kuhn (2012). In addition, by incorporating unlabeled data into training, we further improve the accuracy of our model to 94.26% UAS / 92.41% LAS (93.46% 323 UAS / 91.49% LAS for our greedy model). In our approach we start with the basic structure of Chen and Manning (2014), but with a deeper architecture and improvements to the optimization procedure. These modifications (Section 2) increase the performance of the greedy model by as much as 1%. As in prior work, we train the neural network to model the probability of individual parse actions. However, we do not use these probabilities directly for prediction. Instead, we use the activations from all layers of the neural network as the representation in a structured perceptron model that is trained with beam search and early updates (Section 3). On the Penn Treebank, this structured learning approach significantly improves parsing accuracy by 0.8%. An additional contribution of this work is an effective way to leverage unlabeled data. Neural networks are known to perform very well in the presence of large amounts of training data; however, obtaining more expert-annotated parse trees is very expensive. To this end, we generate large quantities of high-confidence parse trees by parsing unlabeled data with two different parsers and selecting only the sentences for which the two parsers produced the same trees (Section 3.3). This approach is known as “tri-training” (Li et al., 2014) and we show that it benefits our neural network parser significantly more than other approaches. By adding 10 million automatically parsed tokens to the training data, we improve the accuracy of our parsers by almost ∼1.0% on web domain data. We provide an extensive exploration of our model in Section 5 through ablative analysis and other retrospective experiments. One of the goals of this work is to provide guidance for future refinements and improvements on the architecture and modeling choices we introduce in this paper. Finally, we also note that neural network representations have a long history in syntactic parsing (Henderson, 2004; Titov and Henderson, 2007; Titov and Henderson, 2010); however, like Chen and Manning (2014), our network avoids any recurrent structure so as to keep inference fast and efficient and to allow the use of simple backpropagation to compute gradients. Our work is also not the first to apply structured training to neural networks (see e.g. Peng et al. (2009) and Do and Artires (2010) for Conditional Random Field (CRF) training of neural networks). Our paper exh0 = [XgEg] Embedding Layer Input Hidden Layers argmax y2GEN(x) m X j=1 v(yj) · φ(x, cj) h2 = max{0, W2h1 + b2} h1 = max{0, W1h0 + b1} P(y) / exp{β> y h2 + by} 8g 2 {word, tag, label} Buffer NN DT news The det NN JJ VBD nsubj had little effect . ROOT ROOT Stack … … … … … Softmax Layer Perceptron Layer Features Extracted early updates (section 3). Structured learning reduces bias and significantly improves parsing accuracy by 0.6%. 
We demonstrate empirically that beam search based on the scores from the neural network does not work as well, perhaps because of the label bias problem. A second contribution of this work is an effective way to leverage unlabeled data and other parsers. Neural networks are known to perform very well in the presence of large amounts of training data. It is however unlikely that the amount of hand parsed data will increase significantly because of the high cost for syntactic annotations. To this end we generate large quantities of high-confidence parse trees by parsing an unlabeled corpus and selecting only the sentences on which two different parsers produced the same parse trees. This idea comes from tri-training (Li et al., 2014) and while applicable to other parsers as well, we show that it benefits neural network parsers more than models with discrete features. Adding 10 million automatically parsed tokens to the training data improves the accuracy of our parsers further by 0.7%. Our final greedy parser achieves an unlabeled attachment score (UAS) of 93.46% on the Penn Treebank test set, while a model with a beam of size 8 produces an UAS of 94.08% (section 4. To the best of our knowledge, these are some of the very best dependency accuracies on this corpus. We provide an extensive exploration of our model in section 5. In ablation experiments we tease apart our various contributions and modeling choices in order to shed some light on what matters in practice. Neural network representations have been used in structured models before (Peng et al., 2009; Do and Artires, 2010), and have also been used for syntactic parsing (Titov and Henderson, 2007; Titov and Henderson, 2010), alas with fairly complex architectures and constraints. Our work on the other hand introduces a general approach for structured perceptron training with a neural network representation and achieves stateof-the-art parsing results for English. 2 Neural Network Model In this section, we describe the architecture of our model, which is summarized in figure 2. Note that we separate the embedding processing to a distinct “embedding layer” for clarity of presentation. Our model is based upon that of Chen and Manning ROOT had The news had little effect . DT NN VBD JJ NN P Stack Buffer Partial annotations little effect . Feature extraction h0 = [XgEg | g 2 {word, tag, label}] Embedding Layer Configuration Hidden Layers P(y) / exp{β⊤ y hi + by}, Softmax Layer Perceptron Layer argmax d∈GEN(x) m X j=1 v(yj) · φ(x, cj) h2 = max{0, W2h1 + b2}, h1 = max{0, W1h0 + b1}, Figure 1: Schematic overview of our neural network model. Feature Groups si, bi i 2 {1, 2, 3, 4} All lc1(si), lc2(si) i 2 {1, 2} All rc1(si), rc2(si) i 2 {1, 2} All rc1(rc1(si)) i 2 {1, 2} All lc1(lc1(si)) i 2 {1, 2} All Table 1: Features used in the model. si and bi are elements on the stack and buffer, respectively. lci indicates i’th leftmost child and rci the i’th rightmost child. Features that are included in addition to those from Chen and Manning (2014) are marked with ?. Groups indicates which values were extracted from each feature location (e.g. words, tags, labels). (2014) and we discuss the differences between our model and theirs in detail at the end of this section. 2.1 Input layer Given a parse configuration c, we extract a rich set of discrete features which we feed into the neural network. Following Chen and Manning (2014), we group these features by their input source: words, POS tags, and arc labels. The full set of features is given in Table 2. 
The features extracted for each group are represented as a sparse F ⇥V matrix X, where V is the size of the vocabulary of the feature group and F is the number of features: the value of element Xfv is 1 if the f’th feature takes on value v. We produce three input matrices: Xword for words features, Xtag for POS tag features, and Xlabel for arc labels. For all feature groups, we add additional special … Figure 1: Schematic overview of our neural network model. Atomic features are extracted from the i’th elements on the stack (si) and the buffer (bi); lci indicates the i’th leftmost child and rci the i’th rightmost child. We use the top two elements on the stack for the arc features and the top four tokens on stack and buffer for words, tags and arc labels. tends this line of work to the setting of inexact search with beam decoding for dependency parsing; Zhou et al. (2015) concurrently explored a similar approach using a structured probabilistic ranking objective. Dyer et al. (2015) concurrently developed the Stack Long Short-Term Memory (S-LSTM) architecture, which does incorporate recurrent architecture and look-ahead, and which yields comparable accuracy on the Penn Treebank to our greedy model. 2 Neural Network Model In this section, we describe the architecture of our model, which is summarized in Figure 1. Note that we separate the embedding processing to a distinct “embedding layer” for clarity of presentation. Our model is based upon that of Chen and Manning (2014) and we discuss the differences between our model and theirs in detail at the end of this section. We use the arc-standard (Nivre, 2004) transition system. 2.1 Input layer Given a parse configuration c (consisting of a stack s and a buffer b), we extract a rich set of discrete features which we feed into the neural network. Following Chen and Manning (2014), we group these features by their input source: words, POS tags, and arc labels. The features extracted 324 for each group are represented as a sparse F × V matrix X, where V is the size of the vocabulary of the feature group and F is the number of features. The value of element X fv is 1 if the f’th feature takes on value v. We produce three input matrices: Xword for words features, Xtag for POS tag features, and Xlabel for arc labels, with Fword = Ftag = 20 and Flabel = 12 (Figure 1). For all feature groups, we add additional special values for “ROOT” (indicating the POS or word of the root token), “NULL” (indicating no valid feature value could be computed) or “UNK” (indicating an out-of-vocabulary item). 2.2 Embedding layer The first learned layer h0 in the network transforms the sparse, discrete features X into a dense, continuous embedded representation. For each feature group Xg, we learn a Vg × Dg embedding matrix Eg that applies the conversion: h0 = [XgEg | g ∈{word, tag, label}], (1) where we apply the computation separately for each group g and concatenate the results. Thus, the embedding layer has E = P g FgDg outputs, which we reshape to a vector h0. We can choose the embedding dimensionality D for each group freely. Since POS tags and arc labels have much smaller vocabularies, we show in our experiments (Section 5.1) that we can use smaller Dtag and Dlabel, without a loss in accuracy. 2.3 Hidden layers We experimented with one and two hidden layers composed of M rectified linear (Relu) units (Nair and Hinton, 2010). 
Each unit in the hidden layers is fully connected to the previous layer: hi = max{0, Wihi−1 + bi}, (2) where W1 is a M1 × E weight matrix for the first hidden layer and Wi are Mi × Mi−1 matrices for all subsequent layers. The weights bi are bias terms. Relu layers have been well studied in the neural network literature and have been shown to work well for a wide domain of problems (Krizhevsky et al., 2012; Zeiler et al., 2013). Through most of development, we kept Mi = 200, but we found that significantly increasing the number of hidden units improved our results for the final comparison. 2.4 Relationship to Chen and Manning (2014) Our model is clearly inspired by and based on the work of Chen and Manning (2014). There are a few structural differences: (1) we allow for much smaller embeddings of POS tags and labels, (2) we use Relu units in our hidden layers, and (3) we use a deeper model with two hidden layers. Somewhat to our surprise, we found these changes combined with an SGD training scheme (Section 3.1) during the “pre-training” phase of the model to lead to an almost 1% accuracy gain over Chen and Manning (2014). This trend held despite carefully tuning hyperparameters for each method of training and structure combination. Our main contribution from an algorithmic perspective is our training procedure: as described in the next section, we use the structured perceptron for learning the final layer of our model. We thus present a novel way to leverage a neural network representation in a structured prediction setting. 3 Semi-Supervised Structured Learning In this work, we investigate a semi-supervised structured learning scheme that yields substantial improvements in accuracy over the baseline neural network model. There are two complementary contributions of our approach: (1) incorporating structured learning of the model and (2) utilizing unlabeled data. In both cases, we use the neural network to model the probability of each parsing action y as a soft-max function taking the final hidden layer as its input: P(y) ∝exp{β⊤ y hi + by}, (3) where βy is a Mi dimensional vector of weights for class y and i is the index of the final hidden layer of the network. At a high level our approach can be summarized as follows: • First, we pre-train the network’s hidden representations by learning probabilities of parsing actions. Fixing the hidden representations, we learn an additional final output layer using the structured perceptron that uses the output of the network’s hidden layers. In practice this improves accuracy by ∼0.6% absolute. • Next, we show that we can supplement the gold data with a large corpus of high quality 325 automatic parses. We show that incorporating unlabeled data in this way improves accuracy by as much as 1% absolute. 3.1 Backpropagation Pretraining To learn the hidden representations, we use mini-batched averaged stochastic gradient descent (ASGD) (Bottou, 2010) with momentum (Hinton, 2012) to learn the parameters Θ of the network, where Θ = {Eg, Wi, bi, βy | ∀g, i, y}. We use backpropagation to minimize the multinomial logistic loss: L(Θ) = − X j log P(yj | cj, Θ) + λ X i ||Wi||2 2, (4) where λ is a regularization hyper-parameter over the hidden layer parameters (we use λ = 10−4 in all experiments) and j sums over all decisions and configurations {y j, cj} extracted from gold parse trees in the dataset. 
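For concreteness, a minimal sketch of the pretraining objective of Eq. (4): the negative log-likelihood of the gold decisions plus an L2 penalty on the hidden-layer weight matrices. The precomputed log-probability inputs and the parameter list are assumptions made only for illustration.

```python
import numpy as np

def pretraining_loss(log_probs_gold, hidden_weights, lam=1e-4):
    """L(Theta) = -sum_j log P(y_j | c_j, Theta) + lam * sum_i ||W_i||^2.
    'log_probs_gold' holds the log-probabilities the network assigns to the
    gold decisions; 'hidden_weights' is the list of W_i matrices."""
    nll = -np.sum(log_probs_gold)
    l2 = lam * sum(np.sum(W ** 2) for W in hidden_weights)
    return nll + l2
```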
The specific update rule we apply at iteration t is as follows: gt = µgt−1 −∆L(Θt), (5) Θt+1 = Θt + ηtgt, (6) where the descent direction gt is computed by a weighted combination of the previous direction gt−1and the current gradient ∆L(Θt). The parameter µ ∈[0, 1) is the momentum parameter while ηt is the traditional learning rate. In addition, since we did not tune the regularization parameter λ, we apply a simple exponential step-wise decay to ηt; for every γ rounds of updates, we multiply ηt = 0.96ηt−1. The final component of the update is parameter averaging: we maintain averaged parameters ¯Θt = αt ¯Θt−1 + (1 −αt)Θt, where αt is an averaging weight that increases from 0.1 to 0.9999 with 1/t. Combined with averaging, careful tuning of the three hyperparameters µ, η0, and γ using heldout data was crucial in our experiments. 3.2 Structured Perceptron Training Given the hidden representations, we now describe how the perceptron can be trained to utilize these representations. The perceptron algorithm with early updates (Collins and Roark, 2004) requires a feature-vector definition φ that maps a sentence x together with a configuration c to a feature vector φ(x, c) ∈Rd. There is a one-to-one mapping between configurations c and decision sequences y1 . . . yj−1 for any integer j ≥1: we will use c and y1 . . . yj−1 interchangeably. For a sentence x, define GEN(x) to be the set of parse trees for x. Each y ∈GEN(x) is a sequence of decisions y1 . . . ym for some integer m. We use Y to denote the set of possible decisions in the parsing model. For each decision y ∈Y we assume a parameter vector v(y) ∈Rd. These parameters will be trained using the perceptron. In decoding with the perceptron-trained model, we will use beam search to attempt to find: argmax y∈GEN(x) m X j=1 v(yj) · φ(x, y1 . . . yj−1). Thus each decision yj receives a score: v(y j) · φ(x, y1 . . . yj−1). In the perceptron with early updates, the parameters v(y) are trained as follows. On each training example, we run beam search until the goldstandard parse tree falls out of the beam.1 Define j to be the length of the beam at this point. A structured perceptron update is performed using the gold-standard decisions y1 . . . yj as the target, and the highest scoring (incorrect) member of the beam as the negative example. A key idea in this paper is to use the neural network to define the representation φ(x, c). Given the sentence x and the configuration c, assuming two hidden layers, the neural network defines values for h1, h2, and P(y) for each decision y. We experimented with various definitions of φ (Section 5.2) and found that φ(x, c) = [h1 h2 P(y)] (the concatenation of the outputs from both hidden layers, as well as the probabilities for all decisions y possible in the current configuration) had the best accuracy on development data. Note that it is possible to continue to use backpropagation to learn the representation φ(x, c) during perceptron training; however, we found using ASGD to pre-train the representation always led to faster, more accurate results in preliminary experiments, and we left further investigation for future work. 3.3 Incorporating Unlabeled Data Given the high capacity, non-linear nature of the deep network we hypothesize that our model can 1If the gold parse tree stays within the beam until the end of the sentence, conventional perceptron updates are used. 326 be significantly improved by incorporating more data. 
One way to use unlabeled data is through unsupervised methods such as word clusters (Koo et al., 2008); we follow Chen and Manning (2014) and use pretrained word embeddings to initialize our model. The word embeddings capture similar distributional information as word clusters and give consistent improvements by providing a good initialization and information about words not seen in the treebank data. However, obtaining more training data is even more important than a good initialization. One potential way to obtain additional training data is by parsing unlabeled data with previously trained models. McClosky et al. (2006) and Huang and Harper (2009) showed that iteratively re-training a single model (“self-training”) can be used to improve parsers in certain settings; Petrov et al. (2010) built on this work and showed that a slow and accurate parser can be used to “up-train” a faster but less accurate parser. In this work, we adopt the “tri-training” approach of Li et al. (2014): Two parsers are used to process the unlabeled corpus and only sentences for which both parsers produced the same parse tree are added to the training data. The intuition behind this idea is that the chance of the parse being correct is much higher when the two parsers agree: there is only one way to be correct, while there are many possible incorrect parses. Of course, this reasoning holds only as long as the parsers suffer from different biases. We show that tri-training is far more effective than vanilla up-training for our neural network model. We use same setup as Li et al. (2014), intersecting the output of the BerkeleyParser (Petrov et al., 2006), and a reimplementation of ZPar (Zhang and Nivre, 2011) as our baseline parsers. The two parsers agree only 36% of the time on the tune set, but their accuracy on those sentences is 97.26% UAS, approaching the inter annotator agreement rate. These sentences are of course easier to parse, having an average length of 15 words, compared to 24 words for the tune set overall. However, because we only use these sentences to extract individual transition decisions, the shorter length does not seem to hurt their utility. We generate 107 tokens worth of new parses and use this data in the backpropagation stage of training. 4 Experiments In this section we present our experimental setup and the main results of our work. 4.1 Experimental Setup We conduct our experiments on two English language benchmarks: (1) the standard Wall Street Journal (WSJ) part of the Penn Treebank (Marcus et al., 1993) and (2) a more comprehensive union of publicly available treebanks spanning multiple domains. For the WSJ experiments, we follow standard practice and use sections 2-21 for training, section 22 for development and section 23 as the final test set. Since there are many hyperparameters in our models, we additionally use section 24 for tuning. We convert the constituency trees to Stanford style dependencies (De Marneffe et al., 2006) using version 3.3.0 of the converter. We use a CRF-based POS tagger to generate 5fold jack-knifed POS tags on the training set and predicted tags on the dev, test and tune sets; our tagger gets comparable accuracy to the Stanford POS tagger (Toutanova et al., 2003) with 97.44% on the test set. We report unlabeled attachment score (UAS) and labeled attachment score (LAS) excluding punctuation on predicted POS tags, as is standard for English. 
For the second set of experiments, we follow the same procedure as above, but with a more diverse dataset for training and evaluation. Following Vinyals et al. (2015), we use (in addition to the WSJ), the OntoNotes corpus version 5 (Hovy et al., 2006), the English Web Treebank (Petrov and McDonald, 2012), and the updated and corrected Question Treebank (Judge et al., 2006). We train on the union of each corpora’s training set and test on each domain separately. We refer to this setup as the “Treebank Union” setup. In our semi-supervised experiments, we use the corpus from Chelba et al. (2013) as our source of unlabeled data. We process it with the BerkeleyParser (Petrov et al., 2006), a latent variable constituency parser, and a reimplementation of ZPar (Zhang and Nivre, 2011), a transition-based parser with beam search. Both parsers are included as baselines in our evaluation. We select the first 107 tokens for which the two parsers agree as additional training data. For our tri-training experiments, we re-train the POS tagger using the POS tags assigned on the unlabeled data from the Berkeley constituency parser. This increases POS 327 Method UAS LAS Beam Graph-based Bohnet (2010) 92.88 90.71 n/a Martins et al. (2013) 92.89 90.55 n/a Zhang and McDonald (2014) 93.22 91.02 n/a Transition-based ⋆Zhang and Nivre (2011) 93.00 90.95 32 Bohnet and Kuhn (2012) 93.27 91.19 40 Chen and Manning (2014) 91.80 89.60 1 S-LSTM (Dyer et al., 2015) 93.20 90.90 1 Our Greedy 93.19 91.18 1 Our Perceptron 93.99 92.05 8 Tri-training ⋆Zhang and Nivre (2011) 92.92 90.88 32 Our Greedy 93.46 91.49 1 Our Perceptron 94.26 92.41 8 Table 1: Final WSJ test set results. We compare our system to state-of-the-art graph-based and transition-based dependency parsers. ⋆denotes our own re-implementation of the system so we could compare tri-training on a competitive baseline. All methods except Chen and Manning (2014) and Dyer et al. (2015) were run using predicted tags from our POS tagger. For reference, the accuracy of the Berkeley constituency parser (after conversion) is 93.61% UAS / 91.51% LAS. accuracy slightly to 97.57% on the WSJ. 4.2 Model Initialization & Hyperparameters In all cases, we initialized Wi and β randomly using a Gaussian distribution with variance 10−4. We used fixed initialization with bi = 0.2, to ensure that most Relu units are activated during the initial rounds of training. We did not systematically compare this random scheme to others, but we found that it was sufficient for our purposes. For the word embedding matrix Eword, we initialized the parameters using pretrained word embeddings. We used the publicly available word2vec2 tool (Mikolov et al., 2013) to learn CBOW embeddings following the sample configuration provided with the tool. For words not appearing in the unsupervised data and the special “NULL” etc. tokens, we used random initialization. In preliminary experiments we found no difference between training the word embeddings on 1 billion or 10 billion tokens. We therefore trained the word embeddings on the same corpus we used for tri-training (Chelba et al., 2013). We set Dword = 64 and Dtag = Dlabel = 32 for embedding dimensions and M1 = M2 = 2048 hidden units in our final experiments. For the percep2http://code.google.com/p/word2vec/ Method News Web QTB Graph-based Bohnet (2010) 91.38 85.22 91.49 Martins et al. 
(2013) 91.13 85.04 91.54 Zhang and McDonald (2014) 91.48 85.59 90.69 Transition-based ⋆Zhang and Nivre (2011) 91.15 85.24 92.46 Bohnet and Kuhn (2012) 91.69 85.33 92.21 Our Greedy 91.21 85.41 90.61 Our Perceptron (B=16) 92.25 86.44 92.06 Tri-training ⋆Zhang and Nivre (2011) 91.46 85.51 91.36 Our Greedy 91.82 86.37 90.58 Our Perceptron (B=16) 92.62 87.00 93.05 Table 2: Final Treebank Union test set results. We report LAS only for brevity; see Appendix for full results. For these tri-training results, we sampled sentences to ensure the distribution of sentence lengths matched the distribution in the training set, which we found marginally improved the ZPar tri-training performance. For reference, the accuracy of the Berkeley constituency parser (after conversion) is 91.66% WSJ, 85.93% Web, and 93.45% QTB. tron layer, we used φ(x, c) = [h1 h2 P(y)] (concatenation of all intermediate layers). All hyperparameters (including structure) were tuned using Section 24 of the WSJ only. When not tri-training, we used hyperparameters of γ = 0.2, η0 = 0.05, µ = 0.9, early stopping after roughly 16 hours of training time. With the tri-training data, we decreased η0 = 0.05, increased γ = 0.5, and decreased the size of the network to M1 = 1024, M2 = 256 for run-time efficiency, and trained the network for approximately 4 days. For the Treebank Union setup, we set M1 = M2 = 1024 for the standard training set and for the tri-training setup. 4.3 Results Table 1 shows our final results on the WSJ test set, and Table 2 shows the cross-domain results from the Treebank Union. We compare to the best dependency parsers in the literature. For (Chen and Manning, 2014) and (Dyer et al., 2015), we use reported results; the other baselines were run by Bernd Bohnet using version 3.3.0 of the Stanford dependencies and our predicted POS tags for all datasets to make comparisons as fair as possible. On the WSJ and Web tasks, our parser outperforms all dependency parsers in our comparison by a substantial margin. The Question (QTB) dataset is more sensitive to the smaller beam size we use in order to train the models in a reasonable time; if we increase to B = 32 at inference 328 time only, our perceptron performance goes up to 92.29% LAS. Since many of the baselines could not be directly compared to our semi-supervised approach, we re-implemented Zhang and Nivre (2011) and trained on the tri-training corpus. Although tritraining did help the baseline on the dev set (Figure 4), test set performance did not improve significantly. In contrast, it is quite exciting to see that after tri-training, even our greedy parser is more accurate than any of the baseline dependency parsers and competitive with the BerkeleyParser used to generate the tri-training data. As expected, tri-training helps most dramatically to increase accuracy on the Treebank Union setup with diverse domains, yielding 0.4-1.0% absolute LAS improvement gains for our most accurate model. Unfortunately we are not able to compare to several semi-supervised dependency parsers that achieve some of the highest reported accuracies on the WSJ, in particular Suzuki et al. (2009), Suzuki et al. (2011) and Chen et al. (2013). These parsers use the Yamada and Matsumoto (2003) dependency conversion and the accuracies are therefore not directly comparable. The highest of these is Suzuki et al. (2011), with a reported accuracy of 94.22% UAS. 
Even though the UAS is not directly comparable, it is typically similar, and this suggests that our model is competitive with some of the highest reported accuries for dependencies on WSJ. 5 Discussion In this section, we investigate the contribution of the various components of our approach through ablation studies and other systematic experiments. We tune on Section 24, and use Section 22 for comparisons in order to not pollute the official test set (Section 23). We focus on UAS as we found the LAS scores to be strongly correlated. Unless otherwise specified, we use 200 hidden units in each layer to be able to run more ablative experiments in a reasonable amount of time. 5.1 Impact of Network Structure In addition to initialization and hyperparameter tuning, there are several additional choices about model structure and size a practitioner faces when implementing a neural network model. We explore these questions and justify the particular choices we use in the following. Note that we do 91.2 91.4 91.6 91.8 92 92 92.1 92.2 92.3 92.4 92.5 92.6 92.7 UAS (%) on WSJ Tune Set UAS (%) on WSJ Dev Set Variance of Networks on Tuning/Dev Set Pretrained 200x200 Pretrained 200 200x200 200 Figure 2: Effect of hidden layers and pre-training on variance of random restarts. Initialization was either completely random or initialized with word2vec embeddings (“Pretrained”), and either one or two hidden layers of size 200 were used (“200” vs “200x200”). Each point represents maximization over a small hyperparameter grid with early stopping based on WSJ tune set UAS score. Dword = 64, Dtag, Dlabel = 16. not use a beam for this analysis and therefore do not train the final perceptron layer. This is done in order to reduce training times and because the trends persist across settings. Variance reduction with pre-trained embeddings. Since the learning problem is nonconvex, different initializations of the parameters yield different solutions to the learning problem. Thus, for any given experiment, we ran multiple random restarts for every setting of our hyperparameters and picked the model that performed best using the held-out tune set. We found it important to allow the model to stop training early if tune set accuracy decreased. We visualize the performance of 32 random restarts with one or two hidden layers and with and without pretrained word embeddings in Figure 2, and a summary of the figure in Table 3. While adding a second hidden layer results in a large gain on the tune set, there is no gain on the dev set if pre-trained embeddings are not used. In fact, while the overall UAS scores of the tune set and dev set are strongly correlated (ρ = 0.64, p < 10−10), they are not significantly correlated if pre-trained embeddings are not used (ρ = 0.12, p > 0.3). This suggests that an additional benefit of pre-trained embeddings, aside from allowing learning to reach a more accurate solution, is to push learning towards a solution that generalizes to more data. 329 Pre Hidden WSJ §24 (Max) WSJ §22 Y 200 × 200 92.10 ± 0.11 92.58 ±0.12 Y 200 91.76 ± 0.09 92.30 ± 0.10 N 200 × 200 91.84 ± 0.11 92.19 ± 0.13 N 200 91.55 ± 0.10 92.20 ± 0.12 Table 3: Impact of network architecture on UAS for greedy inference. We select the best model from 32 random restarts based on the tune set and show the resulting dev set accuracy. We also show the standard deviation across the 32 restarts. 
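The selection protocol behind Figure 2 and Table 3 (many random restarts, early stopping on the tune set, keep the restart with the best tune-set UAS) can be sketched as follows. The hooks `build_model`, `train_one_epoch`, and `evaluate_uas` are hypothetical, and `max_epochs` and `patience` are placeholders; the text fixes only the number of restarts (32) and the rule that a run stops when tune-set accuracy decreases.

```python
import copy

def train_with_restarts(build_model, train_one_epoch, evaluate_uas,
                        tune_set, n_restarts=32, max_epochs=50, patience=1):
    """Select the best of several random restarts by held-out tune-set UAS."""
    best_model, best_uas = None, -1.0
    for r in range(n_restarts):
        model = build_model(seed=r)               # fresh random initialization per restart
        run_best_uas, run_best_model, bad_epochs = -1.0, None, 0
        for _ in range(max_epochs):
            train_one_epoch(model)
            uas = evaluate_uas(model, tune_set)
            if uas > run_best_uas:
                run_best_uas, run_best_model = uas, copy.deepcopy(model)
                bad_epochs = 0
            else:                                 # tune-set accuracy went down
                bad_epochs += 1
                if bad_epochs >= patience:        # early stopping
                    break
        if run_best_uas > best_uas:
            best_uas, best_model = run_best_uas, run_best_model
    return best_model, best_uas
```

Selecting on the tune set (Section 24) rather than the dev set keeps Section 22 untouched for the comparisons reported in this section.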
# Hidden 64 128 256 512 1024 2048 1 Layer 91.73 92.27 92.48 92.73 92.74 92.83 2 Layers 91.89 92.40 92.71 92.70 92.96 93.13 Table 4: Increasing hidden layer size increases WSJ Dev UAS. Shown is the average WSJ Dev UAS across hyperparameter tuning and early stopping with 3 random restarts with a greedy model. Diminishing returns with increasing embedding dimensions. For these experiments, we fixed one embedding type to a high value and reduced the dimensionality of all others to very small values. The results are plotted in Figure 3, suggesting larger embeddings do not significantly improve results. We also ran tri-training on a very compact model with Dword = 8 and Dtag = Dlabel = 2 (8× fewer parameters than our full model) which resulted in 92.33% UAS accuracy on the dev set. This is comparable to the full model without tri-training, suggesting that more training data can compensate for fewer parameters. Increasing hidden units yields large gains. For these experiments, we fixed the embedding sizes Dword = 64, Dtag = Dlabel = 32 and tried increasing and decreasing the dimensionality of the hidden layers on a logarthmic scale. Improvements in accuracy did not appear to saturate even with increasing the number of hidden units by an order of magnitude, though the network became too slow to train effectively past M = 2048. These results suggest that there are still gains to be made by increasing the efficiency of larger networks, even for greedy shift-reduce parsers. 5.2 Impact of Structured Perceptron We now turn our attention to the importance of structured perceptron training as well as the impact of different latent representations. Bias reduction through structured training. To evaluate the impact of structured training, we Beam 1 2 4 8 16 32 WSJ Only ZN’11 90.55 91.36 92.54 92.62 92.88 93.09 Softmax 92.74 93.07 93.16 93.25 93.24 93.24 Perceptron 92.73 93.06 93.40 93.47 93.50 93.58 Tri-training ZN’11 91.65 92.37 93.37 93.24 93.21 93.18 Softmax 93.71 93.82 93.86 93.87 93.87 93.87 Perceptron 93.69 94.00 94.23 94.33 94.31 94.32 Table 5: Beam search always yields significant gains but using perceptron training provides even larger benefits, especially for the tri-trained neural network model. The best result for each model is highlighted in bold. φ(x, c) WSJ Only Tri-training [h2] 93.16 93.93 [P(y)] 93.26 93.80 [h1 h2] 93.33 93.95 [h1 h2 P(y)] 93.47 94.33 Table 6: Utilizing all intermediate representations improves performance on the WSJ dev set. All results are with B = 8. compare using the estimates P(y) from the neural network directly for beam search to using the activations from all layers as features in the structured perceptron. Using the probability estimates directly is very similar to Ratnaparkhi (1997), where a maximum-entropy model was used to model the distribution over possible actions at each parser state, and beam search was used to search for the highest probability parse. A known problem with beam search in this setting is the label-bias problem. Table 5 shows the impact of using structured perceptron training over using the softmax function during beam search as a function of the beam size used. For reference, our reimplementation of Zhang and Nivre (2011) is trained equivalently for each setting. We also show the impact on beam size when tri-training is used. Although the beam does marginally improve accuracy for the softmax model, much greater gains are achieved when perceptron training is used. Using all hidden layers crucial for structured perceptron. 
We also investigated the impact of connecting the final perceptron layer to all prior hidden layers (Table 6). Our results suggest that all intermediate layers of the network are indeed discriminative. Nonetheless, aggregating all of their activations proved to be the most effective representation for the structured perceptron. This suggests that the representations learned by the network collectively contain the information re330 1 2 4 8 16 32 64 128 89.5 90 90.5 91 91.5 92 Word Embedding Dimension (Dwords) UAS (%) Word Tuning on WSJ (Tune Set, Dpos,Dlabels=32) Pretrained 200x200 Pretrained 200 200x200 200 1 2 4 8 16 32 90.5 91 91.5 92 POS/Label Embedding Dimension (Dpos,Dlabels) UAS (%) POS/Label Tuning on WSJ (Tune Set, Dwords=64) Pretrained 200x200 Pretrained 200 200x200 200 Figure 3: Effect of embedding dimensions on the WSJ tune set. quired to reduce the bias of the model, but not when filtered through the softmax layer. Finally, we also experimented with connecting both hidden layers to the softmax layer during backpropagation training, but we found this did not significantly affect the performance of the greedy model. 5.3 Impact of Tri-Training To evaluate the impact of the tri-training approach, we compared to up-training with the BerkelyParser (Petrov et al., 2006) alone. The results are summarized in Figure 4 for the greedy and perceptron neural net models as well as our reimplementated Zhang and Nivre (2011) baseline. For our neural network model, training on the output of the BerkeleyParser yields only modest gains, while training on the data where the two parsers agree produces significantly better results. This was especially pronounced for the greedy models: after tri-training, the greedy neural network model surpasses the BerkeleyParser in accuracy. It is also interesting to note that up-training improved results far more than tri-training for the baseline. We speculate that this is due to the a lack of diversity in the tri-training data for this model, since the same baseline model was intersected with the BerkeleyParser to generate the tritraining data. 5.4 Error Analysis Regardless of tri-training, using the structured perceptron improved error rates on some of the common and difficult labels: ROOT, ccomp, cc, conj, and nsubj all improved by >1%. We inspected the learned perceptron weights v for the softmax probabilities P(y) (see Appendix) and found that the perceptron reweights the softmax probabilities based on common confusions; e.g. a strong negative weight for the action RIGHT(ccomp) given the softmax model outputs RIGHT(conj). Note ZN’11 (B=1) ZN’11 (B=32) Ours (B=1) Ours (B=8) 90 91 92 93 94 95 Semi−supervised Training (WSJ Dev Set) Base Up Tri Berkeley Figure 4: Semi-supervised training with 107 additional tokens, showing that tri-training gives significant improvements over up-training for our neural net model. that this trend did not hold when φ(x, c) = [P(y)]; without the hidden layer, the perceptron was not able to reweight the softmax probabilities to account for the greedy model’s biases. 6 Conclusion We presented a new state of the art in dependency parsing: a transition-based neural network parser trained with the structured perceptron and ASGD. We then combined this approach with unlabeled data and tri-training to further push state-of-the-art in semi-supervised dependency parsing. Nonetheless, our ablative analysis suggests that further gains are possible simply by scaling up our system to even larger representations. 
In future work, we will apply our method to other languages, explore end-to-end training of the system using structured learning, and scale up the method to larger datasets and network structures. Acknowledgements We would like to thank Bernd Bohnet for training his parsers and TurboParser on our setup. This paper benefitted tremendously from discussions with Ryan McDonald, Greg Coppola, Emily Pitler and Fernando Pereira. Finally, we are grateful to all members of the Google Parsing Team. 331 References Bernd Bohnet and Jonas Kuhn. 2012. The best of both worlds: a graph-based completion model for transition-based parsers. In Proc. EACL, pages 77– 87. Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proc. 23rd International Conference on Computational Linguistics (Coling 2010), pages 89–97. L´eon Bottou. 2010. Large-scale machine learning with stochastic gradient descent. In Proc. COMPSTAT, pages 177–186. Sabine Buchholz and Erwin Marsi. 2006. Conll-x shared task on multilingual dependency parsing. In Proc. CoNLL, pages 149–164. Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. 2013. One billion word benchmark for measuring progress in statistical language modeling. CoRR. Danqi Chen and Christopher D. Manning. 2014. A fast and accurate dependency parser using neural networks. In Proc. EMNLP, pages 740–750. Wenliang Chen, Min Zhang, and Yue Zhang. 2013. Semi-supervised feature transformation for dependency parsing. In Proc. 2013 EMNLP, pages 1303– 1313. Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proc. ACL, Main Volume, pages 111–118, Barcelona, Spain. Marie-Catherine De Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proc. LREC, pages 449–454. Trinh Minh Tri Do and Thierry Artires. 2010. Neural conditional random fields. In AISTATS, volume 9, pages 177–184. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transitionbased dependency parsing with stack long shortterm memory. In Proc. ACL. James Henderson. 2004. Discriminative training of a neural network statistical parser. In Proc. ACL, Main Volume, pages 95–102. Geoffrey E. Hinton. 2012. A practical guide to training restricted boltzmann machines. In Neural Networks: Tricks of the Trade (2nd ed.), Lecture Notes in Computer Science, pages 599–619. Springer. Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. Ontonotes: The 90% solution. In Proc. HLT-NAACL, pages 57– 60. Zhongqiang Huang and Mary Harper. 2009. Selftraining PCFG grammars with latent annotations across languages. In Proc. 2009 EMNLP, pages 832–841, Singapore. John Judge, Aoife Cahill, and Josef van Genabith. 2006. Questionbank: Creating a corpus of parseannotated questions. In Proc. ACL, pages 497–504. Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Proc. ACL-HLT, pages 595–603. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In Proc. NIPS, pages 1097–1105. Zhenghua Li, Min Zhang, and Wenliang Chen. 2014. Ambiguity-aware ensemble training for semisupervised dependency parsing. In Proc. ACL, pages 457–467. Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. 
Computational Linguistics, 19(2):313–330. Andre Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order nonprojective turbo parsers. In Proc. ACL, pages 617– 622. David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective self-training for parsing. In Proc. HLT-NAACL, pages 152–159. Ryan McDonald, Joakim Nivre, Yvonne QuirmbachBrundage, Yoav Goldberg, Dipanjan Das, Kuzman Ganchev, Keith Hall, Slav Petrov, Hao Zhang, Oscar T¨ackstr¨om, Claudia Bedini, N´uria Bertomeu Castell´o, and Jungmee Lee. 2013. Universal dependency annotation for multilingual parsing. In Proc. ACL, pages 92–97. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Vinod Nair and Geoffrey E. Hinton. 2010. Rectified linear units improve restricted boltzmann machines. In Proc. 27th ICML, pages 807–814. Joakim Nivre, Johan Hall, Sandra K¨ubler, Ryan McDonald, Jens Nilsson, Sebastian Riedel, and Deniz Yuret. 2007. The CoNLL 2007 shared task on dependency parsing. In Proc. CoNLL, pages 915–932. Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proc. ACL Workshop on Incremental Parsing, pages 50–57. Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513–553. 332 Jian Peng, Liefeng Bo, and Jinbo Xu. 2009. Conditional neural fields. In Proc. NIPS, pages 1419– 1427. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. Notes of the First Workshop on Syntactic Analysis of NonCanonical Language (SANCL). Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proc. ACL, pages 433– 440. Slav Petrov, Pi-Chuan Chang, Michael Ringgaard, and Hiyan Alshawi. 2010. Uptraining for accurate deterministic question parsing. In Proc. EMNLP, pages 705–713. Adwait Ratnaparkhi. 1997. A linear observed time statistical parser based on maximum entropy models. In Proc. EMNLP, pages 1–10. Jun Suzuki, Hideki Isozaki, Xavier Carreras, and Michael Collins. 2009. An empirical study of semisupervised structured conditional models for dependency parsing. In Proc. 2009 EMNLP, pages 551– 560. Jun Suzuki, Hideki Isozaki, and Masaaki Nagata. 2011. Learning condensed feature representations from large unsupervised data sets for supervised learning. In Proc. ACL-HLT, pages 636–641. Ivan Titov and James Henderson. 2007. Fast and robust multilingual dependency parsing with a generative latent variable model. In Proc. EMNLP, pages 947–951. Ivan Titov and James Henderson. 2010. A latent variable model for generative dependency parsing. In Trends in Parsing Technology, pages 35–55. Springer. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In NAACL. Oriol Vinyals, Lukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey E. Hinton. 2015. Grammar as a foreign language. arXiv:1412.7449. Hiroyasu Yamada and Yuji Matsumoto. 2003. Statistical dependency analysis with support vector machines. In Proc. IWPT, pages 195–206. Matthew D. Zeiler, Marc’Aurelio Ranzato, Rajat Monga, Mark Z. Mao, K. Yang, Quoc Viet Le, Patrick Nguyen, Andrew W. Senior, Vincent Vanhoucke, Jeffrey Dean, and Geoffrey E. Hinton. 2013. On rectified linear units for speech processing. 
In IEEE International Conference on Acoustics, Speech and Signal Processing, pages 3517–3521. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search. In Proc. EMNLP, pages 562–571. Hao Zhang and Ryan McDonald. 2014. Enforcing structural diversity in cube-pruned dependency parsing. In Proc. ACL, pages 656–661. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proc. ACL-HLT, pages 188–193. Hao Zhou, Yue Zhang, and Jiajun Chen. 2015. A neural probabilistic structured-prediction model for transition-based dependency parsing. In Proc. ACL. 333
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 334–343, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Transition-Based Dependency Parsing with Stack Long Short-Term Memory Chris Dyer♣♠Miguel Ballesteros♦♠Wang Ling♠Austin Matthews♠Noah A. Smith♠ ♣Marianas Labs ♦NLP Group, Pompeu Fabra University ♠Carnegie Mellon University [email protected], [email protected], {lingwang,austinma,nasmith}@cs.cmu.edu Abstract We propose a technique for learning representations of parser states in transitionbased dependency parsers. Our primary innovation is a new control structure for sequence-to-sequence neural networks— the stack LSTM. Like the conventional stack data structures used in transitionbased parsing, elements can be pushed to or popped from the top of the stack in constant time, but, in addition, an LSTM maintains a continuous space embedding of the stack contents. This lets us formulate an efficient parsing model that captures three facets of a parser’s state: (i) unbounded look-ahead into the buffer of incoming words, (ii) the complete history of actions taken by the parser, and (iii) the complete contents of the stack of partially built tree fragments, including their internal structures. Standard backpropagation techniques are used for training and yield state-of-the-art parsing performance. 1 Introduction Transition-based dependency parsing formalizes the parsing problem as a series of decisions that read words sequentially from a buffer and combine them incrementally into syntactic structures (Yamada and Matsumoto, 2003; Nivre, 2003; Nivre, 2004). This formalization is attractive since the number of operations required to build any projective parse tree is linear in the length of the sentence, making transition-based parsing computationally efficient relative to graph- and grammarbased formalisms. The challenge in transitionbased parsing is modeling which action should be taken in each of the unboundedly many states encountered as the parser progresses. This challenge has been addressed by development of alternative transition sets that simplify the modeling problem by making better attachment decisions (Nivre, 2007; Nivre, 2008; Nivre, 2009; Choi and McCallum, 2013; Bohnet and Nivre, 2012), through feature engineering (Zhang and Nivre, 2011; Ballesteros and Nivre, 2014; Chen et al., 2014; Ballesteros and Bohnet, 2014) and more recently using neural networks (Chen and Manning, 2014; Stenetorp, 2013). We extend this last line of work by learning representations of the parser state that are sensitive to the complete contents of the parser’s state: that is, the complete input buffer, the complete history of parser actions, and the complete contents of the stack of partially constructed syntactic structures. This “global” sensitivity to the state contrasts with previous work in transitionbased dependency parsing that uses only a narrow view of the parsing state when constructing representations (e.g., just the next few incoming words, the head words of the top few positions in the stack, etc.). Although our parser integrates large amounts of information, the representation used for prediction at each time step is constructed incrementally, and therefore parsing and training time remain linear in the length of the input sentence. 
The technical innovation that lets us do this is a variation of recurrent neural networks with long short-term memory units (LSTMs) which we call stack LSTMs (§2), and which support both reading (pushing) and “forgetting” (popping) inputs. Our parsing model uses three stack LSTMs: one representing the input, one representing the stack of partial syntactic trees, and one representing the history of parse actions to encode parser states (§3). Since the stack of partial syntactic trees may contain both individual tokens and partial syntactic structures, representations of individual tree fragments are computed compositionally with recursive (i.e., similar to Socher et al., 2014) neural networks. The parameters are learned with backpropagation (§4), and we obtain state-of-the-art results on Chinese and English dependency parsing tasks (§5). 334 2 Stack LSTMs In this section we provide a brief review of LSTMs (§2.1) and then define stack LSTMs (§2.2). Notation. We follow the convention that vectors are written with lowercase, boldface letters (e.g., v or vw); matrices are written with uppercase, boldface letters (e.g., M, Ma, or Mab), and scalars are written as lowercase letters (e.g., s or qz). Structured objects such as sequences of discrete symbols are written with lowercase, bold, italic letters (e.g., w refers to a sequence of input words). Discussion of dimensionality is deferred to the experiments section below (§5). 2.1 Long Short-Term Memories LSTMs are a variant of recurrent neural networks (RNNs) designed to cope with the vanishing gradient problem inherent in RNNs (Hochreiter and Schmidhuber, 1997; Graves, 2013). RNNs read a vector xt at each time step and compute a new (hidden) state ht by applying a linear map to the concatenation of the previous time step’s state ht−1 and the input, and passing this through a logistic sigmoid nonlinearity. Although RNNs can, in principle, model long-range dependencies, training them is difficult in practice since the repeated application of a squashing nonlinearity at each step results in an exponential decay in the error signal through time. LSTMs address this with an extra memory “cell” (ct) that is constructed as a linear combination of the previous state and signal from the input. LSTM cells process inputs with three multiplicative gates which control what proportion of the current input to pass into the memory cell (it) and what proportion of the previous memory cell to “forget” (ft). The updated value of the memory cell after an input xt is computed as follows: it = σ(Wixxt + Wihht−1 + Wicct−1 + bi) ft = σ(Wfxxt + Wfhht−1 + Wfcct−1 + bf) ct = ft ⊙ct−1+ it ⊙tanh(Wcxxt + Wchht−1 + bc), where σ is the component-wise logistic sigmoid function, and ⊙is the component-wise (Hadamard) product. The value ht of the LSTM at each time step is controlled by a third gate (ot) that is applied to the result of the application of a nonlinearity to the memory cell contents: ot = σ(Woxxt + Wohht−1 + Wocct + bo) ht = ot ⊙tanh(ct). To improve the representational capacity of LSTMs (and RNNs generally), LSTMs can be stacked in “layers” (Pascanu et al., 2014). In these architectures, the input LSTM at higher layers at time t is the value of ht computed by the lower layer (and xt is the input at the lowest layer). Finally, output is produced at each time step from the ht value at the top layer: yt = g(ht), where g is an arbitrary differentiable function. 
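As a concrete reference point before moving to the stack variant, the gate equations of Section 2.1 translate almost line for line into code. The sketch below is a minimal NumPy transcription of a single LSTM cell with the peephole terms (Wic, Wfc, Woc) as written in the text; the class name, the random initialization, and the dimensions are purely illustrative and are not the scheme used in the paper.

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

class LSTMCell:
    """One LSTM cell following the i/f/c/o equations of Section 2.1."""

    def __init__(self, input_dim, hidden_dim, rng=None):
        rng = rng or np.random.default_rng(0)
        d, h = input_dim, hidden_dim
        init = lambda rows, cols: rng.uniform(-0.1, 0.1, (rows, cols))
        # one block of weights per gate: input (i), forget (f), output (o)
        self.W = {g: {"x": init(h, d), "h": init(h, h), "c": init(h, h),
                      "b": np.zeros(h)} for g in ("i", "f", "o")}
        self.Wcx, self.Wch, self.bc = init(h, d), init(h, h), np.zeros(h)

    def step(self, x, h_prev, c_prev):
        W = self.W
        i = sigmoid(W["i"]["x"] @ x + W["i"]["h"] @ h_prev + W["i"]["c"] @ c_prev + W["i"]["b"])
        f = sigmoid(W["f"]["x"] @ x + W["f"]["h"] @ h_prev + W["f"]["c"] @ c_prev + W["f"]["b"])
        c = f * c_prev + i * np.tanh(self.Wcx @ x + self.Wch @ h_prev + self.bc)
        o = sigmoid(W["o"]["x"] @ x + W["o"]["h"] @ h_prev + W["o"]["c"] @ c + W["o"]["b"])
        h = o * np.tanh(c)              # the value h_t exposed at this time step
        return h, c

# cell = LSTMCell(input_dim=3, hidden_dim=4)
# h, c = cell.step(np.ones(3), np.zeros(4), np.zeros(4))
```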
2.2 Stack Long Short-Term Memories Conventional LSTMs model sequences in a leftto-right order.1 Our innovation here is to augment the LSTM with a “stack pointer.” Like a conventional LSTM, new inputs are always added in the right-most position, but in stack LSTMs, the current location of the stack pointer determines which cell in the LSTM provides ct−1 and ht−1 when computing the new memory cell contents. In addition to adding elements to the end of the sequence, the stack LSTM provides a pop operation which moves the stack pointer to the previous element (i.e., the previous element that was extended, not necessarily the right-most element). Thus, the LSTM can be understood as a stack implemented so that contents are never overwritten, that is, push always adds a new entry at the end of the list that contains a back-pointer to the previous top, and pop only updates the stack pointer.2 This control structure is schematized in Figure 1. By querying the output vector to which the stack pointer points (i.e., the hTOP), a continuous-space “summary” of the contents of the current stack configuration is available. We refer to this value as the “stack summary.” What does the stack summary look like? Intuitively, elements near the top of the stack will 1Ours is not the first deviation from a strict left-toright order: previous variations include bidirectional LSTMs (Graves and Schmidhuber, 2005) and multidimensional LSTMs (Graves et al., 2007). 2Goldberg et al. (2013) propose a similar stack construction to prevent stack operations from invalidating existing references to the stack in a beam-search parser that must (efficiently) maintain a priority queue of stacks. 335 ; x1 y0 y1 ; x1 y0 y1 TOP pop ; x1 y0 y1 TOP TOP push y2 x2 Figure 1: A stack LSTM extends a conventional left-to-right LSTM with the addition of a stack pointer (notated as TOP in the figure). This figure shows three configurations: a stack with a single element (left), the result of a pop operation to this (middle), and then the result of applying a push operation (right). The boxes in the lowest rows represent stack contents, which are the inputs to the LSTM, the upper rows are the outputs of the LSTM (in this paper, only the output pointed to by TOP is ever accessed), and the middle rows are the memory cells (the ct’s and ht’s) and gates. Arrows represent function applications (usually affine transformations followed by a nonlinearity), refer to §2.1 for specifics. influence the representation of the stack. However, the LSTM has the flexibility to learn to extract information from arbitrary points in the stack (Hochreiter and Schmidhuber, 1997). Although this architecture is to the best of our knowledge novel, it is reminiscent of the Recurrent Neural Network Pushdown Automaton (NNPDA) of Das et al. (1992), which added an external stack memory to an RNN. However, our architecture provides an embedding of the complete contents of the stack, whereas theirs made only the top of the stack visible to the RNN. 3 Dependency Parser We now turn to the problem of learning representations of dependency parsers. We preserve the standard data structures of a transition-based dependency parser, namely a buffer of words (B) to be processed and a stack (S) of partially constructed syntactic elements. Each stack element is augmented with a continuous-space vector embedding representing a word and, in the case of S, any of its syntactic dependents. 
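Before continuing with the parser's three stacks, the stack-pointer control structure of Section 2.2 can be made concrete. In the sketch below, `cell` is any object with a `step(x, h_prev, c_prev)` method, for instance the LSTMCell sketched earlier (a hypothetical helper, not the authors' code); states are appended and never overwritten, so push extends from whatever state the pointer designates and pop is only a pointer move.

```python
import numpy as np

class StackLSTM:
    """Stack LSTM (Section 2.2): an LSTM whose previous state is selected
    by a stack pointer, with constant-time push and pop."""

    def __init__(self, cell, hidden_dim):
        self.cell = cell
        empty = (np.zeros(hidden_dim), np.zeros(hidden_dim))  # (h, c) of the empty stack
        self.states = [empty]     # every state ever created; nothing is overwritten
        self.prev = [-1]          # back-pointer from each state to the one below it
        self.top = 0              # index of the state the stack pointer designates

    def push(self, x):
        h_prev, c_prev = self.states[self.top]
        h, c = self.cell.step(x, h_prev, c_prev)
        self.states.append((h, c))
        self.prev.append(self.top)
        self.top = len(self.states) - 1

    def pop(self):
        if self.top == 0:
            raise IndexError("pop from the empty stack")
        self.top = self.prev[self.top]   # just move the pointer back

    def summary(self):
        """h_TOP: the continuous-space summary of the current stack contents."""
        return self.states[self.top][0]

# s = StackLSTM(LSTMCell(input_dim=3, hidden_dim=4), hidden_dim=4)
# s.push(np.ones(3)); s.push(np.zeros(3)); s.pop()
# s.summary() now reflects a stack holding only the first input.
```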
Additionally, we introduce a third stack (A) to represent the history of actions taken by the parser.3 Each of these stacks is associated with a stack LSTM that provides an encoding of their current contents. The full architecture is illustrated in Figure 3, and we will review each of the components in turn. 3The A stack is only ever pushed to; our use of a stack here is purely for implementational and expository convenience. 3.1 Parser Operation The dependency parser is initialized by pushing the words and their representations (we discuss word representations below in §3.3) of the input sentence in reverse order onto B such that the first word is at the top of B and the ROOT symbol is at the bottom, and S and A each contain an emptystack token. At each time step, the parser computes a composite representation of the stack states (as determined by the current configurations of B, S, and A) and uses that to predict an action to take, which updates the stacks. Processing completes when B is empty (except for the empty-stack symbol), S contains two elements, one representing the full parse tree headed by the ROOT symbol and the other the empty-stack symbol, and A is the history of operations taken by the parser. The parser state representation at time t, which we write pt, which is used to is determine the transition to take, is defined as follows: pt = max {0, W[st; bt; at] + d} , where W is a learned parameter matrix, bt is the stack LSTM encoding of the input buffer B, st is the stack LSTM encoding of S, at is the stack LSTM encoding of A, d is a bias term, then passed through a component-wise rectified linear unit (ReLU) nonlinearity (Glorot et al., 2011).4 Finally, the parser state pt is used to compute 4In preliminary experiments, we tried several nonlinearities and found ReLU to work slightly better than the others. 336 overhasty an decision was amod REDUCE-LEFT(amod) SHIFT | {z } | {z } | {z } … SHIFT RED-L(amod) … made S B A ; ; pt root TOP TOP TOP Figure 2: Parser state computation encountered while parsing the sentence “an overhasty decision was made.” Here S designates the stack of partially constructed dependency subtrees and its LSTM encoding; B is the buffer of words remaining to be processed and its LSTM encoding; and A is the stack representing the history of actions taken by the parser. These are linearly transformed, passed through a ReLU nonlinearity to produce the parser state embedding pt. An affine transformation of this embedding is passed to a softmax layer to give a distribution over parsing decisions that can be taken. the probability of the parser action at time t as: p(zt | pt) = exp g⊤ ztpt + qzt  P z′∈A(S,B) exp g⊤ z′pt + qz′, where gz is a column vector representing the (output) embedding of the parser action z, and qz is a bias term for action z. The set A(S, B) represents the valid actions that may be taken given the current contents of the stack and buffer.5 Since pt = f(st, bt, at) encodes information about all previous decisions made by the parser, the chain rule may be invoked to write the probability of any valid sequence of parse actions z conditional on the input as: p(z | w) = |z| Y t=1 p(zt | pt). (1) 3.2 Transition Operations Our parser is based on the arc-standard transition inventory (Nivre, 2004), given in Figure 3. 5In general, A(S, B) is the complete set of parser actions discussed in §3.2, but in some cases not all actions are available. 
For example, when S is empty and words remain in B, a SHIFT operation is obligatory (Sartorio et al., 2013). Why arc-standard? Arc-standard transitions parse a sentence from left to right, using a stack to store partially built syntactic structures and a buffer that keeps the incoming tokens to be parsed. The parsing algorithm chooses an action at each configuration by means of a score. In arc-standard parsing, the dependency tree is constructed bottom-up, because right-dependents of a head are only attached after the subtree under the dependent is fully parsed. Since our parser recursively computes representations of tree fragments, this construction order guarantees that once a syntactic structure has been used to modify a head, the algorithm will not try to find another head for the dependent structure. This means we can evaluate composed representations of tree fragments incrementally; we discuss our strategy for this below (§3.4). 3.3 Token Embeddings and OOVs To represent each input token, we concatenate three vectors: a learned vector representation for each word type (w); a fixed vector representation from a neural language model ( ˜wLM), and a learned representation (t) of the POS tag of the token, provided as auxiliary input to the parser. A 337 Stackt Buffert Action Stackt+1 Buffert+1 Dependency (u, u), (v, v), S B REDUCE-RIGHT(r) (gr(u, v), u), S B u r→v (u, u), (v, v), S B REDUCE-LEFT(r) (gr(v, u), v), S B u r←v S (u, u), B SHIFT (u, u), S B — Figure 3: Parser transitions indicating the action applied to the stack and buffer and the resulting stack and buffer states. Bold symbols indicate (learned) embeddings of words and relations, script symbols indicate the corresponding words and relations. linear map (V) is applied to the resulting vector and passed through a component-wise ReLU, x = max {0, V[w; ˜wLM; t] + b} . This mapping can be shown schematically as in Figure 4. overhasty JJ UNK decision NN decision x2 x3 t2 t3 w2 ˜wLM 2 ˜wLM 3 w3 Figure 4: Token embedding of the words decision, which is present in both the parser’s training data and the language model data, and overhasty, an adjective that is not present in the parser’s training data but is present in the LM data. This architecture lets us deal flexibly with outof-vocabulary words—both those that are OOV in both the very limited parsing data but present in the pretraining LM, and words that are OOV in both. To ensure we have estimates of the OOVs in the parsing training data, we stochastically replace (with p = 0.5) each singleton word type in the parsing training data with the UNK token in each training iteration. Pretrained word embeddings. A veritable cottage industry exists for creating word embeddings, meaning numerous pretraining options for ˜wLM are available. However, for syntax modeling problems, embedding approaches which discard order perform less well (Bansal et al., 2014); therefore we used a variant of the skip n-gram model introduced by Ling et al. (2015), named “structured skip n-gram,” where a different set of parameters is used to predict each context word depending on its position relative to the target word. The hyperparameters of the model are the same as in the skip n-gram model defined in word2vec (Mikolov et al., 2013), and we set the window size to 5, used a negative sampling rate to 10, and ran 5 epochs through unannotated corpora described in §5.1. 
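Putting Sections 3.1 and 3.3 together, the forward computations that turn stack summaries into a distribution over parser actions follow the formulas in the text directly: x = max{0, V[w; w̃LM; t] + b} for tokens, pt = max{0, W[st; bt; at] + d} for the parser state, and a softmax restricted to the valid actions A(S, B). The sketch below is illustrative; the dictionary-based action parameters g and q and the commented usage shapes are assumptions.

```python
import numpy as np

def relu(v):
    return np.maximum(0.0, v)

def token_embedding(V, b, w_learned, w_lm, t_pos):
    """x = max{0, V [w; w_LM; t] + b}  (Section 3.3)."""
    return relu(V @ np.concatenate([w_learned, w_lm, t_pos]) + b)

def parser_state(W, d, s_t, b_t, a_t):
    """p_t = max{0, W [s_t; b_t; a_t] + d}, where s_t, b_t, a_t are the
    stack-LSTM summaries of S, B and A  (Section 3.1)."""
    return relu(W @ np.concatenate([s_t, b_t, a_t]) + d)

def action_distribution(p_t, g, q, valid_actions):
    """Softmax over A(S, B): p(z | p_t) proportional to exp(g_z . p_t + q_z)."""
    scores = np.array([g[z] @ p_t + q[z] for z in valid_actions])
    scores -= scores.max()                    # for numerical stability
    exp_scores = np.exp(scores)
    return dict(zip(valid_actions, exp_scores / exp_scores.sum()))

# Illustrative shapes: 4-dimensional stack summaries, an 8-dimensional parser state.
# rng = np.random.default_rng(0)
# p = parser_state(rng.standard_normal((8, 12)), np.zeros(8),
#                  rng.standard_normal(4), rng.standard_normal(4), rng.standard_normal(4))
# g = {a: rng.standard_normal(8) for a in ("SHIFT", "REDUCE-LEFT", "REDUCE-RIGHT")}
# q = {a: 0.0 for a in g}
# action_distribution(p, g, q, ["SHIFT", "REDUCE-LEFT"])
```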
3.4 Composition Functions Recursive neural network models enable complex phrases to be represented compositionally in terms of their parts and the relations that link them (Socher et al., 2011; Socher et al., 2013c; Hermann and Blunsom, 2013; Socher et al., 2013b). We follow this previous line of work in embedding dependency tree fragments that are present in the stack S in the same vector space as the token embeddings discussed above. A particular challenge here is that a syntactic head may, in general, have an arbitrary number of dependents. To simplify the parameterization of our composition function, we combine headmodifier pairs one at a time, building up more complicated structures in the order they are “reduced” in the parser, as illustrated in Figure 5. Each node in this expanded syntactic tree has a value computed as a function of its three arguments: the syntactic head (h), the dependent (d), and the syntactic relation being satisfied (r). We define this by concatenating the vector embeddings of the head, dependent and relation, applying a linear operator and a component-wise nonlinearity as follows: c = tanh (U[h; d; r] + e) . For the relation vector, we use an embedding of the parser action that was applied to construct the relation (i.e., the syntactic relation paired with the direction of attachment). 4 Training Procedure We trained our parser to maximize the conditional log-likelihood (Eq. 1) of treebank parses given sentences. Our implementation constructs a computation graph for each sentence and runs forwardand backpropagation to obtain the gradients of this 338 decision overhasty an det overhasty decision an c mod head head mod amod amod c1 rel c2 det rel Figure 5: The representation of a dependency subtree (above) is computed by recursively applying composition functions to ⟨head, modifier, relation⟩triples. In the case of multiple dependents of a single head, the recursive branching order is imposed by the order of the parser’s reduce operations (below). objective with respect to the model parameters. The computations for a single parsing model were run on a single thread on a CPU. Using the dimensions discussed in the next section, we required between 8 and 12 hours to reach convergence on a held-out dev set.6 Parameter optimization was performed using stochastic gradient descent with an initial learning rate of η0 = 0.1, and the learning rate was updated on each pass through the training data as ηt = η0/(1 + ρt), with ρ = 0.1 and where t is the number of epochs completed. No momentum was used. To mitigate the effects of “exploding” gradients, we clipped the ℓ2 norm of the gradient to 5 before applying the weight update rule (Sutskever et al., 2014; Graves, 2013). An ℓ2 penalty of 1 × 10−6 was applied to all weights. Matrix and vector parameters were initialized with uniform samples in ± p 6/(r + c), where r and c were the number of rows and columns in the structure (Glorot and Bengio, 2010). Dimensionality. The full version of our parsing model sets dimensionalities as follows. LSTM hidden states are of size 100, and we use two layers of LSTMs for each stack. Embeddings of the parser actions used in the composition functions have 16 dimensions, and the output embedding size is 20 dimensions. Pretained word embeddings have 100 dimensions (English) and 80 dimensions (Chinese), and the learned word embeddings have 6Software for replicating the experiments is available from https://github.com/clab/lstm-parser. 32 dimensions. Part of speech embeddings have 12 dimensions. 
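Before turning to how these dimensions were chosen, note that the composition function of Section 3.4 is itself a one-liner; the sketch below is illustrative, with the head, dependent, and relation embeddings passed in as plain vectors and the relation vector standing for the parser action (label plus attachment direction) that created the arc.

```python
import numpy as np

def compose(U, e, head, dependent, relation):
    """c = tanh(U [h; d; r] + e): embed one <head, modifier, relation> triple
    (Section 3.4). U and e must be shaped to match the concatenated input."""
    return np.tanh(U @ np.concatenate([head, dependent, relation]) + e)

# For "an overhasty decision", arc-standard parsing builds the subtree bottom-up,
# so the head vector is recomposed once per reduce, e.g. (order follows the
# parser's reduce sequence):
#   c1 = compose(U, e, v_decision, v_overhasty, r_amod)
#   c2 = compose(U, e, c1, v_an, r_det)
```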
These dimensions were chosen based on intuitively reasonable values (words should have higher dimensionality than parsing actions, POS tags, and relations; LSTM states should be relatively large), and it was confirmed on development data that they performed well.7 Future work might more carefully optimize these parameters; our reported architecture strikes a balance between minimizing computational expense and finding solutions that work. 5 Experiments We applied our parsing model and several variations of it to two parsing tasks and report results below. 5.1 Data We used the same data setup as Chen and Manning (2014), namely an English and a Chinese parsing task. This baseline configuration was chosen since they likewise used a neural parameterization to predict actions in an arc-standard transition-based parser. • For English, we used the Stanford Dependencency (SD) treebank (de Marneffe et al., 2006) used in (Chen and Manning, 2014) which is the closest model published, with the same splits.8 The part-of-speech tags are predicted by using the Stanford Tagger (Toutanova et al., 2003) with an accuracy of 97.3%. This treebank contains a negligible amount of non-projective arcs (Chen and Manning, 2014). • For Chinese, we use the Penn Chinese Treebank 5.1 (CTB5) following Zhang and Clark (2008),9 with gold part-of-speech tags which is also the same as in Chen and Manning (2014). Language model word embeddings were generated, for English, from the AFP portion of the English Gigaword corpus (version 5), and from the complete Chinese Gigaword corpus (version 2), 7We did perform preliminary experiments with LSTM states of 32, 50, and 80, but the other dimensions were our initial guesses. 8Training: 02-21. Development: 22. Test: 23. 9Training: 001–815, 1001–1136. Development: 886– 931, 1148–1151. Test: 816–885, 1137–1147. 339 as segmented by the Stanford Chinese Segmenter (Tseng et al., 2005). 5.2 Experimental configurations We report results on five experimental configurations per language, as well as the Chen and Manning (2014) baseline. These are: the full stack LSTM parsing model (S-LSTM), the stack LSTM parsing model without POS tags (−POS), the stack LSTM parsing model without pretrained language model embeddings (−pretraining), the stack LSTM parsing model that uses just head words on the stack instead of composed representations (−composition), and the full parsing model where rather than an LSTM, a classical recurrent neural network is used (S-RNN). 5.3 Results Following Chen and Manning (2014) we exclude punctuation symbols for evaluation. Tables 1 and 2 show comparable results with Chen and Manning (2014), and we show that our model is better than their model in both the development set and the test set. Development Test UAS LAS UAS LAS S-LSTM 93.2 90.9 93.1 90.9 −POS 93.1 90.4 92.7 90.3 −pretraining 92.7 90.4 92.4 90.0 −composition 92.7 89.9 92.2 89.6 S-RNN 92.8 90.4 92.3 90.1 C&M (2014) 92.2 89.7 91.8 89.6 Table 1: English parsing results (SD) Dev. set Test set UAS LAS UAS LAS S-LSTM 87.2 85.9 87.2 85.7 −composition 85.8 84.0 85.3 83.6 −pretraining 86.3 84.7 85.7 84.1 −POS 82.8 79.8 82.2 79.1 S-RNN 86.3 84.7 86.1 84.6 C&M (2014) 84.0 82.4 83.9 82.4 Table 2: Chinese parsing results (CTB5) 5.4 Analysis Overall, our parser substantially outperforms the baseline neural network parser of Chen and Manning (2014), both in the full configuration and in the various ablated conditions we report. 
The one exception to this is the −POS condition for the Chinese parsing task, which in which we underperform their baseline (which used gold POS tags), although we do still obtain reasonable parsing performance in this limited case. We note that predicted POS tags in English add very little value—suggesting that we can think of parsing sentences directly without first tagging them. We also find that using composed representations of dependency tree fragments outperforms using representations of head words alone, which has implications for theories of headedness. Finally, we find that while LSTMs outperform baselines that use only classical RNNs, these are still quite capable of learning good representations. Effect of beam size. Beam search was determined to have minimal impact on scores (absolute improvements of ≤0.3% were possible with small beams). Therefore, all results we report used greedy decoding—Chen and Manning (2014) likewise only report results with greedy decoding. This finding is in line with previous work that generates sequences from recurrent networks (Grefenstette et al., 2014), although Vinyals et al. (2015) did report much more substantial improvements with beam search on their “grammar as a foreign language” parser.10 6 Related Work Our approach ties together several strands of previous work. First, several kinds of stack memories have been proposed to augment neural architectures. Das et al. (1992) proposed a neural network with an external stack memory based on recurrent neural networks. In contrast to our model, in which the entire contents of the stack are summarized in a single value, in their model, the network could only see the contents of the top of the stack. Mikkulainen (1996) proposed an architecture with a stack that had a summary feature, although the stack control was learned as a latent variable. A variety of authors have used neural networks to predict parser actions in shift-reduce parsers. The earliest attempt we are aware of is due to Mayberry and Miikkulainen (1999). The resurgence of interest in neural networks has resulted 10Although superficially similar to ours, Vinyals et al. (2015) is a phrase-structure parser and adaptation to the dependency parsing scenario would have been nontrivial. We discuss their work in §6. 340 in in several applications to transition-based dependency parsers (Weiss et al., 2015; Chen and Manning, 2014; Stenetorp, 2013). In these works, the conditioning structure was manually crafted and sensitive to only certain properties of the state, while we are conditioning on the global state object. Like us, Stenetorp (2013) used recursively composed representations of the tree fragments (a head and its dependents). Neural networks have also been used to learn representations for use in chart parsing (Henderson, 2004; Titov and Henderson, 2007; Socher et al., 2013a; Le and Zuidema, 2014). LSTMs have also recently been demonstrated as a mechanism for learning to represent parse structure.Vinyals et al. (2015) proposed a phrasestructure parser based on LSTMs which operated by first reading the entire input sentence in so as to obtain a vector representation of it, and then generating bracketing structures sequentially conditioned on this representation. Although superficially similar to our model, their approach has a number of disadvantages. First, they relied on a large amount of semi-supervised training data that was generated by parsing a large unannotated corpus with an off-the-shelf parser. 
Second, while they recognized that a stack-like shiftreduce parser control provided useful information, they only made the top word of the stack visible during training and decoding. Third, although it is impressive feat of learning that an entire parse tree be represented by a vector, it seems that this formulation makes the problem unnecessarily difficult. Finally, our work can be understood as a progression toward using larger contexts in parsing. An exhaustive summary is beyond the scope of this paper, but some of the important milestones in this tradition are the use of cube pruning to efficiently include nonlocal features in discriminative chart reranking (Huang and Chiang, 2008), approximate decoding techniques based on LP relaxations in graph-based parsing to include higherorder features (Martins et al., 2010), and randomized hill-climbing methods that enable arbitrary nonlocal features in global discriminative parsing models (Zhang et al., 2014). Since our parser is sensitive to any part of the input, its history, or its stack contents, it is similar in spirit to the last approach, which permits truly arbitrary features. 7 Conclusion We presented stack LSTMs, recurrent neural networks for sequences, with push and pop operations, and used them to implement a state-of-theart transition-based dependency parser. We conclude by remarking that stack memory offers intriguing possibilities for learning to solve general information processing problems (Mikkulainen, 1996). Here, we learned from observable stack manipulation operations (i.e., supervision from a treebank), and the computed embeddings of final parser states were not used for any further prediction. However, this could be reversed, giving a device that learns to construct context-free programs (e.g., expression trees) given only observed outputs; one application would be unsupervised parsing. Such an extension of the work would make it an alternative to architectures that have an explicit external memory such as neural Turing machines (Graves et al., 2014) and memory networks (Weston et al., 2015). However, as with those models, without supervision of the stack operations, formidable computational challenges must be solved (e.g., marginalizing over all latent stack operations), but sampling techniques and techniques from reinforcement learning have promise here (Zaremba and Sutskever, 2015), making this an intriguing avenue for future work. Acknowledgments The authors would like to thank Lingpeng Kong and Jacob Eisenstein for comments on an earlier version of this draft and Danqi Chen for assistance with the parsing datasets. This work was sponsored in part by the U. S. Army Research Laboratory and the U. S. Army Research Office under contract/grant number W911NF-10-1-0533, and in part by NSF CAREER grant IIS-1054319. Miguel Ballesteros is supported by the European Commission under the contract numbers FP7-ICT610411 (project MULTISENSOR) and H2020RIA-645012 (project KRISTINA). References Miguel Ballesteros and Bernd Bohnet. 2014. Automatic feature selection for agenda-based dependency parsing. In Proc. COLING. Miguel Ballesteros and Joakim Nivre. 2014. MaltOptimizer: Fast and effective parser optimization. Natural Language Engineering. 341 Mohit Bansal, Kevin Gimpel, and Karen Livescu. 2014. Tailoring continuous word representations for dependency parsing. In Proc. ACL. Bernd Bohnet and Joakim Nivre. 2012. A transitionbased system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proc. EMNLP. 
2015
33
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 344–354, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Leveraging Linguistic Structure For Open Domain Information Extraction Gabor Angeli Melvin Johnson Premkumar Department of Computer Science Stanford University {angeli, melvinj, manning}@cs.stanford.edu Christopher D. Manning Abstract Relation triples produced by open domain information extraction (open IE) systems are useful for question answering, inference, and other IE tasks. Traditionally these are extracted using a large set of patterns; however, this approach is brittle on out-of-domain text and long-range dependencies, and gives no insight into the substructure of the arguments. We replace this large pattern set with a few patterns for canonically structured sentences, and shift the focus to a classifier which learns to extract self-contained clauses from longer sentences. We then run natural logic inference over these short clauses to determine the maximally specific arguments for each candidate triple. We show that our approach outperforms a state-of-the-art open IE system on the end-to-end TAC-KBP 2013 Slot Filling task. 1 Introduction Open information extraction (open IE) has been shown to be useful in a number of NLP tasks, such as question answering (Fader et al., 2014), relation extraction (Soderland et al., 2010), and information retrieval (Etzioni, 2011). Conventionally, open IE systems search a collection of patterns over either the surface form or dependency tree of a sentence. Although a small set of patterns covers most simple sentences (e.g., subject verb object constructions), relevant relations are often spread across clauses (see Figure 1) or presented in a non-canonical form. Systems like Ollie (Mausam et al., 2012) approach this problem by using a bootstrapping method to create a large corpus of broad-coverage partially lexicalized patterns. Although this is effective at capturing many of these patterns, it Born in Honolulu, Hawaii, Obama is a US Citizen. Our System Ollie (Obama; is; US citizen) (Obama; is; a US citizen) (Obama; born in; (Obama; be born in; Honolulu) Honolulu, Hawaii) (Honolulu; be born in; Hawaii) (Obama; is citizen of; US) Friends give true praise. Enemies give fake praise. Our System Ollie (friends; give; true praise) (friends; give; true praise) (friends; give; praise) (enemies; give; fake praise) (enemies; give; fake praise) Heinz Fischer of Austria visits the US Our System Ollie (Heinz Fischer; visits; US) (Heinz Fischer of Austria; visits; the US) Figure 1: Open IE extractions produced by the system, alongside extractions from the stateof-the-art Ollie system. Generating coherent clauses before applying patterns helps reduce false matches such as (Honolulu; be born in; Hawaii). Inference over the sub-structure of arguments, in turn, allows us to drop unnecessary information (e.g., of Austria), but only when it is warranted (e.g., keep fake in fake praise). can lead to unintuitive behavior on out-of-domain text. For instance, while Obama is president is extracted correctly by Ollie as (Obama; is; president), replacing is with are in cats are felines produces no extractions. Furthermore, existing systems struggle at producing canonical argument forms – for example, in Figure 1 the argument Heinz Fischer of Austria is likely less useful for downstream applications than Heinz Fischer. 
In this paper, we shift the burden of extracting informative and broad coverage triples away from this large pattern set. Rather, we first pre-process the sentence in linguistically motivated ways to produce coherent clauses which are (1) logically 344 entailed by the original sentence, and (2) easy to segment into open IE triples. Our approach consists of two stages: we first learn a classifier for splitting a sentence into shorter utterances (Section 3), and then appeal to natural logic (S´anchez Valencia, 1991) to maximally shorten these utterances while maintaining necessary context (Section 4.1). A small set of 14 hand-crafted patterns can then be used to segment an utterance into an open IE triple. We treat the first stage as a greedy search problem: we traverse a dependency parse tree recursively, at each step predicting whether an edge should yield an independent clause. Importantly, in many cases na¨ıvely yielding a clause on a dependency edge produces an incomplete utterance (e.g., Born in Honolulu, Hawaii, from Figure 1). These are often attributable to control relationships, where either the subject or object of the governing clause controls the subject of the subordinate clause. We therefore allow the produced clause to sometimes inherit the subject or object of its governor. This allows us to capture a large variety of long range dependencies with a concise classifier. From these independent clauses, we then extract shorter sentences, which will produce shorter arguments more likely to be useful for downstream applications. A natural framework for solving this problem is natural logic – a proof system built on the syntax of human language (see Section 4.1). We can then observe that Heinz Fischer of Austria visits China entails that Heinz Fischer visits China. On the other hand, we respect situations where it is incorrect to shorten an argument. For example, No house cats have rabies should not entail that cats have rabies, or even that house cats have rabies. When careful attention to logical validity is necessary – such as textual entailment – this approach captures even more subtle phenomena. For example, whereas all rabbits eat fresh vegetables yields (rabbits; eat; vegetables), the apparently similar sentence all young rabbits drink milk does not yield (rabbits; drink; milk). We show that our new system performs well on a real world evaluation – the TAC KBP Slot Filling challenge (Surdeanu, 2013). We outperform both an official submission on open IE, and a baseline of replacing our extractor with Ollie, a state-ofthe-art open IE systems. 2 Related Work There is a large body of work on open information extraction. One line of work begins with TextRunner (Yates et al., 2007) and ReVerb (Fader et al., 2011), which make use of computationally efficient surface patterns over tokens. With the introduction of fast dependency parsers, Ollie (Mausam et al., 2012) continues in the same spirit but with learned dependency patterns, improving on the earlier WOE system (Wu and Weld, 2010). The Never Ending Language Learning project (Carlson et al., 2010) has a similar aim, iteratively learning more facts from the internet from a seed set of examples. Exemplar (Mesquita et al., 2013) adapts the open IE framework to nary relationships similar to semantic role labeling, but without the expensive machinery. 
Open IE triples have been used in a number of applications – for example, learning entailment graphs for new triples (Berant et al., 2011), and matrix factorization for unifying open IE and structured relations (Yao et al., 2012; Riedel et al., 2013). In each of these cases, the concise extractions provided by open IE allow for efficient symbolic methods for entailment, such as Markov logic networks or matrix factorization. Prior work on the KBP challenge can be categorized into a number of approaches. The most common of these are distantly supervised relation extractors (Craven and Kumlien, 1999; Wu and Weld, 2007; Mintz et al., 2009; Sun et al., 2011), and rule based systems (Soderland, 1997; Grishman and Min, 2010; Chen et al., 2010). However, both of these approaches require careful tuning to the task, and need to be trained explicitly on the KBP relation schema. Soderland et al. (2013) submitted a system to KBP making use of open IE relations and an easily constructed mapping to KBP relations; we use this as a baseline for our empirical evaluation. Prior work has used natural logic for RTE-style textual entailment, as a formalism well-suited for formal semantics in neural networks, and as a framework for common-sense reasoning (MacCartney and Manning, 2009; Watanabe et al., 2012; Bowman et al., 2014; Angeli and Manning, 2013). We adopt the precise semantics of Icard and Moss (2014). Our approach of finding short entailments from a longer utterance is similar in spirit to work on textual entailment for information extraction (Romano et al., 2006). 345 Born in a small town, she took the midnight train going anywhere. prep in amod det vmod nsubj dobj nn det vmod dobj she Born in a small town prep in amod det nsubj (input) (extracted clause) ↓ ↓ she took the midnight train going anywhere she took the midnight train Born in a small town, she took the midnight train she took midnight train Born in a town, she took the midnight train . . . she Born in small town she Born in a town she Born in town ↓ ↓ (she; took; midnight train) (she; born in; small town) (she; born in; town) Figure 2: An illustration of our approach. From left to right, a sentence yields a number of independent clauses (e.g., she Born in a small town – see Section 3). From top to bottom, each clause produces a set of entailed shorter utterances, and segments the ones which match an atomic pattern into a relation triple (see Section 4.1). 3 Inter-Clause Open IE In the first stage of our method, we produce a set of self-contained clauses from a longer utterance. Our objective is to produce a set of clauses which can stand on their own syntactically and semantically, and are entailed by the original sentence (see Figure 2). Note that this task is not specific to extracting open IE triples. Conventional relation extractors, entailment systems, and other NLP applications may also benefit from such a system. We frame this task as a search problem. At a given node in the parse tree, we classify each outgoing arc e = p l−→c, from the governor p to a dependent c with [collapsed] Stanford Dependency label l, into an action to perform on that arc. Once we have chosen an action to take on that arc, we can recurse on the dependent node. We decompose the action into two parts: (1) the action to take on the outgoing edge e, and (2) the action to take on the governor p. For example, in our motivating example, we are considering the arc: e = took vmod −−−→born. 
In this case, the correct action is to (1) yield a new clause rooted at born, and (2) interpret the subject of born as the subject of took. We proceed to describe this action space in more detail, followed by an explanation of our training data, and finally our classifier. 3.1 Action Space The three actions we can perform on a dependency edge are: Yield Yields a new clause on this dependency arc. A canonical case of this action is the arc suggest ccomp −−−−→brush in Dentists suggest that you should brush your teeth, yielding you should brush your teeth. Recurse Recurse on this dependency arc, but do not yield it as a new clause. For example, in the sentence faeries are dancing in the field where I lost my bike, we must recurse through the intermediate constituent the field where I lost my bike – which itself is not relevant – to get to the clause of interest: I lost my bike. Stop Do not recurse on this arc, as the subtree under this arc is not entailed by the parent sentence. This is the case, for example, for most leaf nodes (furry cats are cute should not entail the clause furry), and is an important action for the efficiency of the algorithm. With these three actions, a search path through the tree becomes a sequence of Recurse and Yield actions, terminated by a Stop action (or leaf node). For example, a search sequence A Recurse −−−−−→ B Y ield −−−→C Stop −−−→D would yield a clause rooted at C. A sequence A Y ield −−−→B Y ield −−−→C Stop −−−→D would yield clauses rooted at both B and C. Finding all such sequences is in general exponential in the size of the tree. In practice, during training we run breadth first search to collect the first 10 000 sequences. During inference we run uniform cost search until our classifier predictions fall below a 346 given threshold. For the Stop action, we do not need to further specify an action to take on the parent node. However, for both of the other actions, it is often the case that we would like to capture a controller in the higher clause. We define three such common actions: Subject Controller If the arc we are considering is not already a subject arc, we can copy the subject of the parent node and attach it as a subject of the child node. This is the action taken in the example Born in a small town, she took the midnight train. Object Controller Analogous to the subject controller action above, but taking the object instead. This is the intended action for examples like I persuaded Fred to leave the room.1 Parent Subject If the arc we are taking is the only outgoing arc from a node, we take the parent node as the (passive) subject of the child. This is the action taken in the example Obama, our 44th president to yield a clause with the semantics of Obama [is] our 44th president. Although additional actions are easy to imagine, we found empirically that these cover a wide range of applicable cases. We turn our attention to the training data for learning these actions. 3.2 Training We collect a noisy dataset to train our clause generation model. We leverage the distant supervision assumption for relation extraction, which creates a noisy corpus of sentences annotated with relation mentions (subject and object spans in the sentence with a known relation). Then, we take this annotation as itself distant supervision for a correct sequence of actions to take: any sequence which recovers the known relation is correct. 
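The following is a minimal Python sketch of this labeling idea on a depth-one toy tree: every assignment of an action to a root edge is enumerated, and an assignment is marked positive if one of the clauses it yields recovers the known argument pair. The toy tree, the oracle that says what a yielded clause produces, and the reduction to one action per edge are simplifying assumptions made only for illustration; the actual system runs breadth-first search over full dependency trees and deeper action sequences.

```python
from itertools import product

# Toy dependency tree (depth one): child -> label of the arc from the root
# "took", as in the running example "Born in a small town, she took the
# midnight train" (Figure 2).
ROOT_EDGES = {"born": "vmod", "she": "nsubj", "train": "dobj"}

ACTIONS = ("RECURSE", "YIELD", "STOP")

def label_action_assignments(clause_produces, known_args):
    """Enumerate one action per root edge (the whole search space for a
    depth-one tree) and mark an assignment positive iff some yielded clause
    recovers the known relation's argument pair."""
    labeled = []
    children = sorted(ROOT_EDGES)
    for actions in product(ACTIONS, repeat=len(children)):
        assignment = dict(zip(children, actions))
        yielded = [c for c, a in assignment.items() if a == "YIELD"]
        positive = any(clause_produces(c) == known_args for c in yielded)
        labeled.append((assignment, positive))
    return labeled

if __name__ == "__main__":
    # Invented oracle: a clause yielded at "born" (with "she" copied in as its
    # subject by the Subject Controller action) recovers (she, small town).
    oracle = lambda child: ("she", "small town") if child == "born" else None
    results = label_action_assignments(oracle, ("she", "small town"))
    positives = [a for a, ok in results if ok]
    print(len(positives), "of", len(results), "action assignments are positive")
```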
We use a small subset of the KBP source documents for 2010 (Ji et al., 2010) and 2013 (Surdeanu, 2013) as our distantly supervised corpus. To try to maximize the density of known relations in the training sentences, we take all sentences which have at least one known relation for every 10 tokens in the sentence, resulting in 43 155 sentences. In addition, we incorporate the 23 725 manually annotated examples from Angeli et al. (2014). 1The system currently misses most most such cases due to insufficient support in the training data. Once we are given a collection of labeled sentences, we assume that a sequence of actions which leads to a correct extraction of a known relation is a positive sequence. A correct extraction is any extraction we produce from our model (see Section 4) which has the same arguments as the known relation. For instance, if we know that Obama was born in Hawaii from the sentence Born in Hawaii, Obama . . . , and an action sequence produces the triple (Obama, born in, Hawaii), then we take that action sequence as a positive sequence. Any sequence of actions which results in a clause which produces no relations is in turn considered a negative sequence. The third case to consider is a sequence of actions which produces a relation, but it is not one of the annotated relations. This arises from the incomplete negatives problem in distantly supervised relation extraction (Min et al., 2013): since our knowledge base is not exhaustive, we cannot be sure if an extracted relation is incorrect or correct but previously unknown. Although many of these unmatched relations are indeed incorrect, the dataset is sufficiently biased towards the STOP action that the occasional false negative hurts end-to-end performance. Therefore, we simply discard such sequences. Given a set of noisy positive and negative sequences, we construct training data for our action classifier. All but the last action in a positive sequence are added to the training set with the label Recurse; the last action is added with the label Split. Only the last action in a negative sequence is added with the label Stop. We partition the feature space of our dataset according to the action applied to the parent node. 3.3 Inference We train a multinomial logistic regression classifier on our noisy training data, using the features in Table 1. The most salient features are the label of the edge being taken, the incoming edge to the parent of the edge being taken, neighboring edges for both the parent and child of the edge, and the part of speech tag of the endpoints of the edge. The dataset is weighted to give 3× weight to examples in the Recurse class, as precision errors in this class are relatively harmless for accuracy, while recall errors are directly harmful to recall. Inference now reduces to a search problem. Be347 Feature Class Feature Templates Edge taken {l, short name(l)} Last edge taken {incoming edge(p)} Neighbors of parent {nbr(p), (p, nbr(p))} Grandchild edges {out edge(c), (e, out edge(c))} Grandchild count {count (nbr(echild)) (e, count (nbr(echild)))} Has subject/object ∀e∈{e,echild}∀l∈{subj,obj} 1(l ∈nbr(e)) POS tag signature {pos(p), pos(c), (pos(p), pos(c))} Features at root {1(p = root), POS(p)} Table 1: Features for the clause splitter model, deciding to split on the arc e = p l−→c. The feature class is a high level description of features; the feature templates are the particular templates used. 
For instance, the POS signature contains the tag of the parent, the tag of the child, and both tags joined in a single feature. Note that all features are joined with the action to be taken on the parent. ginning at the root of the tree, we consider every outgoing edge. For every possible action to be performed on the parent (i.e., clone subject, clone root, no action), we apply our trained classifier to determine whether we (1) split the edge off as a clause, and recurse; (2) do not split the edge, and recurse; or (3) do not recurse. In the first two cases, we recurse on the child of the arc, and continue until either all arcs have been exhausted, or all remaining candidate arcs have been marked as not recursable. We will use the scores from this classifier to inform the score assigned to our generated open IE extractions (Section 4). The score of a clause is the product of the scores of actions taken to reach the clause. The score of an extraction will be this score multiplied by the score of the extraction given the clause. 4 Intra-Clause Open IE We now turn to the task of generating a maximally compact sentence which retains the core semantics of the original utterance, and parsing the sentence into a conventional open IE subject verb object triple. This is often a key component in downstream applications, where extractions need to be not only correct, but also informative. Whereas an argument like Heinz Fischer of Austria is often correct, a downstream application must apply further processing to recover information about either Heinz Fischer, or Austria. Moreover, it must do so without the ability to appeal to the larger context of the sentence. 4.1 Validating Deletions with Natural Logic We adopt a subset of natural logic semantics dictating contexts in which lexical items can be removed. Natural logic as a formalism captures common logical inferences appealing directly to the form of language, rather than parsing to a specialized logical syntax. It provides a proof theory for lexical mutations to a sentence which either preserve or negate the truth of the premise. For instance, if all rabbits eat vegetables then all cute rabbits eat vegetables, since we are allowed to mutate the lexical item rabbit to cute rabbit. This is done by observing that rabbit is in scope of the first argument to the operator all. Since all induces a downward polarity environment for its first argument, we are allowed to replace rabbit with an item which is more specific – in this case cute rabbit. To contrast, the operator some induces an upward polarity environment for its first argument, and therefore we may derive the inference from cute rabbit to rabbit in: some cute rabbits are small therefore some rabbits are small. For a more comprehensive introduction to natural logic, see van Benthem (2008). We mark the scopes of all operators (all, no, many, etc.) in a sentence, and from this determine whether every lexical item can be replaced by something more general (has upward polarity), more specific (downward polarity), or neither. In the absence of operators, all items have upwards polarity. Each dependency arc is then classified into whether deleting the dependent of that arc makes the governing constituent at that node more general, more specific (a rare case), or neither.2 For example, removing the amod edge in cute amod ←−−−rabbit yields the more general lexical item rabbit. However, removing the nsubj edge in Fido nsubj ←−−−runs would yield the unentailed (and nonsensical) phrase runs. 
The last, rare, case is an edge that causes the resulting item to be more specific – e.g., quantmod: about quantmod ←−−−−−−200 is more general than 200. 2We use the Stanford Dependencies representation (de Marneffe and Manning, 2008). 348 For most dependencies, this semantics can be hard-coded with high accuracy. However, there are at least two cases where more attention is warranted. The first of these concerns non-subsective adjectives: for example a fake gun is not a gun. For this case, we make use of the list of non-subsective adjectives collected in Nayak et al. (2014), and prohibit their deletion as a hard constraint. The second concern is with prepositional attachment, and direct object edges. For example, whereas Alice went to the playground prep with −−−−−−→ Bob entails that Alice went to the playground, it is not meaningful to infer that Alice is friends prep with −−−−−−→Bob entails Alice is friends. Analogously, Alice played dobj −−→baseball on Sunday entails that Alice played on Sunday; but, Obama signed dobj −−→the bill on Sunday should not entail the awkward phrase *Obama signed on Sunday. We learn these attachment affinities empirically from the syntactic n-grams corpus of Goldberg and Orwant (2013). This gives us counts for how often object and preposition edges occur in the context of the governing verb and relevant neighboring edges. We hypothesize that edges which are frequently seen to co-occur are likely to be essential to the meaning of the sentence. To this end, we compute the probability of seeing an arc of a given type, conditioned on the most specific context we have statistics for. These contexts, and the order we back off to more general contexts, is given in Figure 3. To compute a score s of deleting the edge from the affinity probability p collected from the syntactic n-grams, we simply cap the affinity and subtract it from 1: s = 1 −min(1, p K ) where K is a hyperparameter denoting the minimum fraction of the time an edge should occur in a context to be considered entirely unremovable. In our experiments, we set K = 1 3. The score of an extraction, then, is the product of the scores of each deletion multiplied by the score from the clause splitting step in Section 3. 4.2 Atomic Patterns Once a set of short entailed sentences is produced, it becomes straightforward to segment them into conventional open IE triples. We employ 6 simple dependency patterns, given in Table 2, which Obama signed the bill into law on Friday nsubj dobj det prep into prep on prep backoff              p  prep on | Obama signed bill nsubj dobj  p  prep on | Obama signed law nsubj prep into  p  prep on | Obama signed nsubj  p  prep on | signed  dobj backoff ( p  dobj | Obama signed bill nsubj dobj  p  dobj | signed  Figure 3: The ordered list of backoff probabilities when deciding to drop a prepositional phrase or direct object. The most specific context is chosen for which an empirical probability exists; if no context is found then we allow dropping prepositional phrases and disallow dropping direct objects. Note that this backoff arbitrarily orders contexts of the same size. Input Extraction cats play with yarn (cats; play with; yarn) fish like to swim (fish; like to; swim) cats have tails (cats; have; tails) cats are cute (cats; are; cute) Tom and Jerry are fighting (Tom; fighting; Jerry) There are cats with tails (cats; have; tails) Table 2: The six dependency patterns used to segment an atomic sentence into an open IE triple. 
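Before turning to the relation mapping, here is a minimal sketch of the deletion score from Section 4.1 above, assuming an invented affinity table in place of the syntactic n-gram statistics; only the formula s = 1 - min(1, p/K) with K = 1/3, the backoff over progressively more general contexts, and the default of dropping prepositional phrases but keeping direct objects are taken from the text.

```python
K = 1.0 / 3.0   # minimum co-occurrence fraction above which an edge is kept

# Invented affinity statistics: (edge, context...) -> empirical probability.
AFFINITY = {
    ("prep_on", "sign", "bill"): 0.02,   # "signed the bill ... on Friday"
    ("dobj", "sign"): 0.51,              # "signed" almost always keeps its object
}

def deletion_score(backoff_contexts, default_removable=True):
    """Score of deleting an edge: s = 1 - min(1, p / K), where p comes from the
    most specific context in the backoff list that has statistics."""
    for context in backoff_contexts:
        if context in AFFINITY:
            p = AFFINITY[context]
            return 1.0 - min(1.0, p / K)
    # No statistics at all: drop prepositional phrases, keep direct objects.
    return 1.0 if default_removable else 0.0

if __name__ == "__main__":
    # Dropping "on Friday" from "Obama signed the bill on Friday" scores high...
    print(deletion_score([("prep_on", "sign", "bill"), ("prep_on", "sign")]))
    # ...while dropping the direct object "the bill" scores zero.
    print(deletion_score([("dobj", "sign", "bill"), ("dobj", "sign")],
                         default_removable=False))
```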
cover the majority of atomic relations we are interested in. When information is available to disambiguate the substructure of compound nouns (e.g., named entity segmentation), we extract additional relations with 5 dependency and 3 TokensRegex (Chang and Manning, 2014) surface form patterns. These are given in Table 3; we refer to these as nominal relations. Note that the constraint of named entity information is by no means required for the system. In other applications – for example, applications in vision – the otherwise trivial nominal relations could be quite useful. 349 KBP Relation Open IE Relation PMI2 KBP Relation Open IE Relation PMI2 Org:Founded found in 1.17 Per:Date Of Birth be bear on 1.83 be found in 1.15 bear on 1.28 Org:Dissolved *buy Chrysler in 0.95 Per:Date Of Death die on 0.70 *membership in 0.60 be assassinate on 0.65 Org:LOC Of HQ in 2.12 Per:LOC Of Birth be bear in 1.21 base in 1.82 Per:LOC Of Death *elect president of 2.89 Org:Member Of *tough away game in 1.80 Per:Religion speak about 0.67 *away game in 1.80 popular for 0.60 Org:Parents ’s bank 1.65 Per:Parents daughter of 0.54 *also add to 1.52 son of 1.52 Org:Founded By invest fund of 1.48 Per:LOC Residence of 1.48 own stake besides 1.18 *independent from 1.18 Table 4: A selection of the mapping from KBP to lemmatized open IE relations, conditioned on the types of the arguments being correct. The top one or two relations are shown for 7 person and 6 organization relations. Incorrect or dubious mappings are marked with an asterisk. Input Extraction Durin, son of Thorin (Durin; is son of; Thorin) Thorin’s son, Durin (Thorin; ’s son; Durin) IBM CEO Rometty (Rometty; is CEO of; IBM) President Obama (Obama; is; President) Fischer of Austria (Fischer; is of; Austria) IBM’s research group (IBM; ’s; research group) US president Obama (Obama; president of; US) Our president, Obama, (Our president; be; Obama) Table 3: The eight patterns used to segment a noun phrase into an open IE triple. The first five are dependency patterns; the last three are surface patterns. 5 Mapping OpenIE to a Known Relation Schema A common use case for open IE systems is to map them to a known relation schema. This can either be done manually with minimal annotation effort, or automatically from available training data. We use both methods in our TAC-KBP evaluation. A collection of relation mappings was constructed by a single annotator in approximately a day,3 and a relation mapping was learned using the procedure described in this section. We map open IE relations to the KBP schema by searching for co-occurring relations in a large distantly-labeled corpus, and marking open IE and 3The official submission we compare against claimed two weeks for constructing their manual mapping, although a version of their system constructed in only 3 hours performs nearly as well. KBP relation pairs which have a high PMI2 value (B´eatrice, 1994; Evert, 2005) conditioned on their type signatures matching. To compute PMI2, we collect probabilities for the open IE and KBP relation co-occurring, the probability of the open IE relation occurring, and the probability of the KBP relation occurring. Each of these probabilities is conditioned on the type signature of the relation. For example, the joint probability of KBP relation rk and open IE relation ro, given a type signature of t1, t2, would be p(rk, ro | t1, t2) = count(rk, ro, t1, t2) P r′ k,r′o count(r′ k, r′o, t1, t2). 
Omitting the conditioning on the type signature for notational convenience, and defining p(rk) and p(ro) analogously, we can then compute The PMI2 value between the two relations: PMI2(rk, ro) = log  p(rk, ro)2 p(rk) · p(ro)  Note that in addition to being a measure related to PMI, this captures a notion similar to alignment by agreement (Liang et al., 2006); the formula can be equivalently written as log [p(rk | ro)p(ro | rk)]. It is also functionally the same as the JC WordNet distance measure (Jiang and Conrath, 1997). Some sample type checked relation mappings are given in Table 4. In addition to intuitive mappings (e.g., found in →Org:Founded), we can note some rare, but high precision pairs (e.g., invest fund of →Org:Founded By). We can also see 350 the noise in distant supervision occasionally permeate the mapping, e.g., with elect president of → Per:LOC Of Death – a president is likely to die in his own country. 6 Evaluation We evaluate our approach in the context of a realworld end-to-end relation extraction task – the TAC KBP Slot Filling challenge. In Slot Filling, we are given a large unlabeled corpus of text, a fixed schema of relations (see Section 5), and a set of query entities. The task is to find all relation triples in the corpus that have as a subject the query entity, and as a relation one of the defined relations. This can be viewed intuitively as populating Wikipedia Infoboxes from a large unstructured corpus of text. We compare our approach to the University of Washington submission to TAC-KBP 2013 (Soderland et al., 2013). Their system used OpenIE v4.0 (a successor to Ollie) run over the KBP corpus and then they generated a mapping from the extracted relations to the fixed schema. Unlike our system, Open IE v4.0 employs a semantic role component extracting structured SRL frames, alongside a conventional open IE system. Furthermore, the UW submission allows for extracting relations and entities from substrings of an open IE triple argument. For example, from the triple (Smith; was appointed; acting director of Acme Corporation), they extract that Smith is employed by Acme Corporation. We disallow such extractions, passing the burden of finding correct precise extractions to the open IE system itself (see Section 4). For entity linking, the UW submission uses Tom Lin’s entity linker (Lin et al., 2012); our submission uses the Illinois Wikifier (Ratinov et al., 2011) without the relational inference component, for efficiency. For coreference, UW uses the Stanford coreference system (Lee et al., 2011); we employ a variant of the simple coref system described in (Pink et al., 2014). We report our results in Table 5.4 UW Official refers to the official submission in the 2013 challenge; we show a 3.1 F1 improvement (to 22.7 4All results are reported with the anydoc flag set to true in the evaluation script, meaning that only the truth of the extracted knowledge base entry and not the associated provenance is scored. In absence of human evaluators, this is in order to not penalize our system unfairly for extracting a new correct provenance. System P R F1 UW Official∗ 69.8 11.4 19.6 Ollie† 57.4 4.8 8.9 + Nominal Rels∗ 57.7 11.8 19.6 Our System - Nominal Rels† 64.3 8.6 15.2 + Nominal Rels∗ 61.9 13.9 22.7 + Alt. Name 57.8 17.8 27.1 + Alt. Name + Website 58.6 18.6 28.3 Table 5: A summary of our results on the endto-end KBP Slot Filling task. UW official is the submission made to the 2013 challenge. 
The second row is the accuracy of Ollie embedded in our framework, and of Ollie evaluated with nominal relations from our system. Lastly, we report our system, our system with nominal relations removed, and our system combined with an alternate names detector and rule-based website detector. Comparable systems are marked with a dagger† or asterisk∗. F1) over this submission, evaluated using a comparable approach. A common technique in KBP systems but not employed by the official UW submission in 2013 is to add alternate names based on entity linking and coreference. Additionally, websites are often extracted using heuristic namematching as they are hard to capture with traditional relation extraction techniques. If we make use of both of these, our end-to-end accuracy becomes 28.2 F1. We attempt to remove the variance in scores from the influence of other components in an endto-end KBP system. We ran the Ollie open IE system (Mausam et al., 2012) in an identical framework to ours, and report accuracy in Table 5. Note that when an argument to an Ollie extraction contains a named entity, we take the argument to be that named entity. The low performance of this system can be partially attributed to its inability to extract nominal relations. To normalize for this, we report results when the Ollie extractions are supplemented with the nominal relations produced by our system (Ollie + Nominal Rels in Table 5). Conversely, we can remove the nominal relation extractions from our system; in both cases we outperform Ollie on the task. 351 0.00 0.02 0.04 0.06 0.08 0.10 0.12 0.14 Recall 0.0 0.2 0.4 0.6 0.8 1.0 Precision Ollie Our System (without nominals) Figure 4: A precision/recall curve for Ollie and our system (without nominals). For clarity, recall is plotted on a range from 0 to 0.15. 6.1 Discussion We plot a precision/recall curve of our extractions in Figure 4 in order to get an informal sense of the calibration of our confidence estimates. Since confidences only apply to standard extractions, we plot the curves without including any of the nominal relations. The confidence of a KBP extraction in our system is calculated as the sum of the confidences of the open IE extractions that support it. So, for instance, if we find (Obama; be bear in; Hawaii) n times with confidences c1 . . . cn, the confidence of the KBP extraction would be Pn i=0 ci. It is therefore important to note that the curve in Figure 4 necessarily conflates the confidences of individual extractions, and the frequency of an extraction. With this in mind, the curves lend some interesting insights. Although our system is very high precision on the most confident extractions, it has a large dip in precision early in the curve. This suggests that the model is extracting multiple instances of a bad relation. Systematic errors in the clause splitter are the likely cause of these errors. While the approach of splitting sentences into clauses generalizes better to out-of-domain text, it is reasonable that the errors made in the clause splitter manifest across a range of sentences more often than the fine-grained patterns of Ollie would. On the right half of the PR curve, however, our system achieves both higher precision and extends to a higher recall than Ollie. Furthermore, the curve is relatively smooth near the tail, suggesting that indeed we are learning a reasonable estimate of confidence for extractions that have only one supporting instance in the text – empirically, 46% of our extractions. 
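A small sketch of the confidence aggregation described above, where the confidence of a KBP fact is the sum of the confidences of the open IE extractions supporting it; the example extractions and scores below are invented.

```python
from collections import defaultdict

def kbp_confidences(supporting_extractions):
    """supporting_extractions: iterable of ((subject, kbp_relation, object), score)
    pairs, one per supporting open IE extraction."""
    totals = defaultdict(float)
    for fact, score in supporting_extractions:
        totals[fact] += score
    return dict(totals)

if __name__ == "__main__":
    extractions = [
        (("Obama", "Per:LOC_Of_Birth", "Hawaii"), 0.9),  # (Obama; be bear in; Hawaii)
        (("Obama", "Per:LOC_Of_Birth", "Hawaii"), 0.7),  # a second supporting mention
        (("Obama", "Per:Employee_Of", "US government"), 0.4),
    ]
    print(kbp_confidences(extractions))  # the birth-place fact scores 0.9 + 0.7 = 1.6
```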
In total, we extract 42 662 862 open IE triples which link to a pair of entities in the corpus (i.e., are candidate KBP extractions), covering 1 180 770 relation types. 202 797 of these relation types appear in more than 10 extraction instances; 28 782 in more than 100 instances, and 4079 in more than 1000 instances. 308 293 relation types appear only once. Note that our system over-produces extractions when both a general and specific extraction are warranted; therefore these numbers are an overestimate of the number of semantically meaningful facts. For comparison, Ollie extracted 12 274 319 triples, covering 2 873 239 relation types. 1 983 300 of these appeared only once; 69 010 appeared in more than 10 instances, 7951 in more than 100 instances, and 870 in more than 1000 instances. 7 Conclusion We have presented a system for extracting open domain relation triples by breaking a long sentence into short, coherent clauses, and then finding the maximally simple relation triples which are warranted given each of these clauses. This allows the system to have a greater awareness of the context of each extraction, and to provide informative triples to downstream applications. We show that our approach performs well on one such downstream application: the KBP Slot Filling task. Acknowledgments We thank the anonymous reviewers for their thoughtful feedback. Stanford University gratefully acknowledges the support of a Natural Language Understanding-focused gift from Google Inc. and the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA875013-2-0040. Any opinions, findings, and conclusion or recommendations expressed in this material are those of the authors and do not necessarily reflect the view of the DARPA, AFRL, or the US government. 352 References Gabor Angeli and Christopher D. Manning. 2013. Philosophers are mortal: Inferring the truth of unseen facts. In CoNLL. Gabor Angeli, Julie Tibshirani, Jean Y. Wu, and Christopher D. Manning. 2014. Combining distant and partial supervision for relation extraction. In EMNLP. DAILLE B´eatrice. 1994. Approche mixte pour l’extraction automatique de terminologie: statistique lexicale et filtres linguistiques. Ph.D. thesis, Th`ese de Doctorat. Universit´e de Paris VII. Jonathan Berant, Ido Dagan, and Jacob Goldberger. 2011. Global learning of typed entailment rules. In Proceedings of ACL, Portland, OR. Samuel R. Bowman, Christopher Potts, and Christopher D. Manning. 2014. Recursive neural networks can learn logical semantics. CoRR, (arXiv:1406.1827). Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R Hruschka Jr, and Tom M Mitchell. 2010. Toward an architecture for neverending language learning. In AAAI. Angel X. Chang and Christopher D. Manning. 2014. TokensRegex: Defining cascaded regular expressions over tokens. Technical Report CSTR 2014-02, Department of Computer Science, Stanford University. Zheng Chen, Suzanne Tamang, Adam Lee, Xiang Li, Wen-Pin Lin, Matthew Snover, Javier Artiles, Marissa Passantino, and Heng Ji. 2010. CUNYBLENDER. In TAC-KBP. Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In AAAI. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The Stanford typed dependencies representation. In Coling 2008: Proceedings of the workshop on Cross-Framework and Cross-Domain Parser Evaluation. Oren Etzioni. 2011. 
Search needs a shake-up. Nature, 476(7358):25–26. Stefan Evert. 2005. The statistics of word cooccurrences: word pairs and collocations. Ph.D. thesis, Universit at Stuttgart. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011. Identifying relations for open information extraction. In EMNLP. Anthony Fader, Luke Zettlemoyer, and Oren Etzioni. 2014. Open question answering over curated and extracted knowledge bases. In KDD. Yoav Goldberg and Jon Orwant. 2013. A dataset of syntactic-ngrams over time from a very large corpus of english books. In *SEM. Ralph Grishman and Bonan Min. 2010. New York University KBP 2010 slot-filling system. In Proc. TAC 2010 Workshop. Thomas Icard, III and Lawrence Moss. 2014. Recent progress on monotonicity. Linguistic Issues in Language Technology. Heng Ji, Ralph Grishman, Hoa Trang Dang, Kira Griffitt, and Joe Ellis. 2010. Overview of the tac 2010 knowledge base population track. In Third Text Analysis Conference. Jay J Jiang and David W Conrath. 1997. Semantic similarity based on corpus statistics and lexical taxonomy. Proceedings of the 10th International Conference on Research on Computational Linguistics. Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2011. Stanford’s multi-pass sieve coreference resolution system at the conll-2011 shared task. In CoNLL Shared Task. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In NAACL-HLT. Thomas Lin, Mausam, and Oren Etzioni. 2012. No noun phrase left behind: detecting and typing unlinkable entities. In EMNLP-CoNLL. Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proceedings of the eighth international conference on computational semantics. Mausam, Michael Schmitz, Robert Bart, Stephen Soderland, and Oren Etzioni. 2012. Open language learning for information extraction. In EMNLP. Filipe Mesquita, Jordan Schmidek, and Denilson Barbosa. 2013. Effectiveness and efficiency of open relation extraction. In EMNLP. Bonan Min, Ralph Grishman, Li Wan, Chang Wang, and David Gondek. 2013. Distant supervision for relation extraction with an incomplete knowledge base. In NAACL-HLT. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In ACL. Neha Nayak, Mark Kowarsky, Gabor Angeli, and Christopher D. Manning. 2014. A dictionary of nonsubsective adjectives. Technical Report CSTR 2014-04, Department of Computer Science, Stanford University, October. Glen Pink, Joel Nothman, and R. James Curran. 2014. Analysing recall loss in named entity slot filling. In EMNLP. 353 Lev Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011. Local and global algorithms for disambiguation to wikipedia. In ACL. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In NAACL-HLT. Lorenza Romano, Milen Kouylekov, Idan Szpektor, Ido Dagan, and Alberto Lavelli. 2006. Investigating a generic paraphrase-based approach for relation extraction. EACL. V´ıctor Manuel S´anchez S´anchez Valencia. 1991. Studies on natural logic and categorial grammar. Ph.D. thesis, University of Amsterdam. Stephen Soderland, Brendan Roof, Bo Qin, Shi Xu, Mausam, and Oren Etzioni. 2010. Adapting open information extraction to domain-specific relations. AI Magazine, 31(3):93–102. Stephen Soderland, John Gilmer, Robert Bart, Oren Etzioni, and Daniel S. Weld. 2013. 
Open information extraction to KBP relations in 3 hours. In Text Analysis Conference. Stephen G Soderland. 1997. Learning text analysis rules for domain-specific natural language processing. Ph.D. thesis, University of Massachusetts. Ang Sun, Ralph Grishman, Wei Xu, and Bonan Min. 2011. New York University 2011 system for KBP slot filling. In Proceedings of the Text Analytics Conference. Mihai Surdeanu. 2013. Overview of the tac2013 knowledge base population evaluation: English slot filling and temporal slot filling. In Sixth Text Analysis Conference. Johan van Benthem. 2008. A brief history of natural logic. Technical Report PP-2008-05, University of Amsterdam. Yotaro Watanabe, Junta Mizuno, Eric Nichols, Naoaki Okazaki, and Kentaro Inui. 2012. A latent discriminative model for compositional entailment relation recognition using natural logic. In COLING. Fei Wu and Daniel S Weld. 2007. Autonomously semantifying wikipedia. In Proceedings of the sixteenth ACM conference on information and knowledge management. ACM. Fei Wu and Daniel S Weld. 2010. Open information extraction using wikipedia. In ACL. Association for Computational Linguistics. Limin Yao, Sebastian Riedel, and Andrew McCallum. 2012. Probabilistic databases of universal schema. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Webscale Knowledge Extraction. Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. TextRunner: Open information extraction on the web. In ACL-HLT. 354
2015
34
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 355–364, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Joint Information Extraction and Reasoning: A Scalable Statistical Relational Learning Approach William Yang Wang Language Technologies Institute Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] William W. Cohen Machine Learning Department Carnegie Mellon University Pittsburgh, PA 15213, USA [email protected] Abstract A standard pipeline for statistical relational learning involves two steps: one first constructs the knowledge base (KB) from text, and then performs the learning and reasoning tasks using probabilistic first-order logics. However, a key issue is that information extraction (IE) errors from text affect the quality of the KB, and propagate to the reasoning task. In this paper, we propose a statistical relational learning model for joint information extraction and reasoning. More specifically, we incorporate context-based entity extraction with structure learning (SL) in a scalable probabilistic logic framework. We then propose a latent context invention (LCI) approach to improve the performance. In experiments, we show that our approach outperforms state-of-the-art baselines over three real-world Wikipedia datasets from multiple domains; that joint learning and inference for IE and SL significantly improve both tasks; that latent context invention further improves the results. 1 Introduction Information extraction (IE) is often an early stage in a pipeline that contains non-trivial downstream tasks, such as question answering (Moll´a et al., 2006), machine translation (Babych and Hartley, 2003), or other applications (Wang and Hua, 2014; Li et al., 2014). Knowledge bases (KBs) populated by IE techniques have also been used as an input to systems that learn rules allowing further inferences to be drawn from the KB (Lao et al., 2011), a task sometimes called KB completion (Socher et al., 2013; Wang et al., 2014; West et al., 2014). Pipelines of this sort frequently suffer from error cascades, which reduces performance of the full system1. In this paper, we address this issue, and propose a joint model system for IE and KB completion in a statistical relational learning (SRL) setting (Sutton and McCallum, 2006; Getoor and Taskar, 2007). In particular, we outline a system which takes as input a partially-populated KB and a set of relation mentions in context, and jointly learns: 1) how to extract new KB facts from the relation mentions, and; 2) a set of logical rules that allow one to infer new KB facts. Evaluation of the KB facts inferred by the joint system shows that the joint model outperforms its individual components. We also introduce a novel extension of this model called Latent Context Invention (LCI), which associates latent states with context features for the IE component of the model. We show that LCI further improves performance, leading to a substantial improvement over prior state-of-the-art methods for joint relation-learning and IE. To summarize our contributions: • We present a joint model for IE and relational learning in a statistical relational learning setting which outperforms universal schemas (Riedel et al., 2013), a state-of-theart joint method; • We incorporate latent context into the joint SRL model, bringing additional improvements. In next section, we discuss related work. 
We describe our approach in Section 3. The details of the datasets are introduced in Section 4. We show experimental results in Section 5, discuss in Section 6, and conclude in Section 7. 1For example, KBP slot filling is known for its complex pipeline, and the best overall F1 scores (Wiegand and Klakow, 2013; Angeli et al., 2014) for recent competitions are within the range of 30-40. 355 2 Related Work In NLP, our work clearly aligns with recent work on joint models of individual text processing tasks. For example, Finkel and Manning (2009) work on the problem of joint IE and parsing, where they use tree representations to combine named entities and syntactic chunks. Recently, Devlin et al. (Devlin et al., 2014) use a joint neural network model for machine translation, and obtain an impressive 6.3 BLEU point improvement over a hierarchical phrase-based system. In information extraction, weak supervision (Craven et al., 1999; Mintz et al., 2009) is a common technique for extracting knowledge from text, without large-scale annotations. In extracting Infobox information from Wikipedia text, Wu and Weld (2007; 2010) also use a similar idea. In an open IE project, Banko et al. (2007) use a seed KB, and utilize weak supervision techniques to extend it. Note that weakly supervised extraction approaches can be noisy, as a pair of entities in context may be associated with one, none, or several of the possible relation labels, a property which complicates the application of distant supervision methods (Mintz et al., 2009; Riedel et al., 2010; Hoffmann et al., 2011; Surdeanu et al., 2012). Lao et al. (2012) learned syntactic rules for finding relations defined by “lexico-semantic” paths spanning KB relations and text data. Wang et al. (2015) extends the methods used by Lao et al. to learn mutually recursive relations. Recently, Riedel et al. (2013) propose a matrix factorization technique for relation embedding, but their method requires a large amount of negative and unlabeled examples. Weston et al. (2013) connect text with KB embedding by adding a scoring term, though no shared parameters/embeddings are used. All these prior works make use of text and KBs. Unlike these prior works, our method is posed in an SRL setting, using a scalable probabilistic first-order logic, and allows learning of relational rules that are mutually recursive, thus allowing learning of multi-step inferences. Unlike some prior methods, our method also does not require negative examples, or large numbers of unlabeled examples. 3 Our Approach In this section, we first briefly review the semantics, inference, and learning procedures of a about(X,Z) :- handLabeled(X,Z) # base. about(X,Z) :- sim(X,Y),about(Y,Z) # prop. sim(X,Y) :- links(X,Y) # sim,link. sim(X,Y) :hasWord(X,W),hasWord(Y,W), linkedBy(X,Y,W) # sim,word. linkedBy(X,Y,W) :- true # by(W). Table 1: A simple program in ProPPR. See text for explanation. newly proposed scalable probabilistic logic called ProPPR (Wang et al., 2013; Wang et al., 2014). Then, we describe the joint model for information extraction and relational learning. Finally, a latent context invention theory is proposed for enhancing the performance of the joint model. 3.1 ProPPR: Background Below we will give an informal description of ProPPR, based on a small example. More formal descriptions can be found elsewhere (Wang et al., 2013). ProPPR (for Programming with Personalized PageRank) is a stochastic extension of the logic programming language Prolog. 
A simple program in ProPPR is shown in Table 1. Roughly speaking, the upper-case tokens are variables, and the “:-” symbol means that the left-hand side (the head of a rule) is implied by the conjunction of conditions on the right-hand size (the body). In addition to the rules shown, a ProPPR program would include a database of facts: in this example, facts would take the form handLabeled(page,label), hasWord(page,word), or linkedBy(page1,page2), representing labeled training data, a documentterm matrix, and hyperlinks, respectively. The condition “true” in the last rule is “syntactic sugar” for an empty body. In ProPPR, a user issues a query, such as “about(a,X)?”, and the answer is a set of possible bindings for the free variables in the query (here there is just one such varable, “X”). To answer the query, ProPPR builds a proof graph. Each node in the graph is a list of conditions R1, . . . , Rk that remain to prove, interpreted as a conjunction. To find the children of a node R1, . . . , Rk, you look for either 1. database facts that match R1, in which case the appropriate variables are bound, and R1 is removed from the list, or; 356 Figure 1: A partial proof graph for the query about(a,Z). The upper right shows the link structure between documents a, b, c, and d, and some of the words in the documents. Restart links are not shown. 2. a rule A ←B1, . . . , Bm with a head A that matches R1, in which case again the appropriate variables are bound, and R1 is replaced with the body of the rule, resulting in the new list B1, . . . , Bm, R2, . . . , Rk. The procedures for “matching” and “appropriately binding variables” are illustrated in Figure 1.2 An empty list of conditions (written 2 in the figure) corresponds to a complete proof of the initial query, and by collecting the required variable bindings, this proof can be used to determine an answer to the initial query. In Prolog, this proof graph is constructed onthe-fly in a depth-first, left-to-right way, returning the first solution found, and backtracking, if requested, to find additional solutions. In ProPPR, however, we will define a stochastic process on the graph, which will generate a score for each node, and hence a score for each answer to the query. The stochastic process used in ProPPR is personalized PageRank (Page et al., 1998; Csalogny et al., 2005), also known as random-walkwith-restart. Intuitively, this process upweights solution nodes that are reachable by many short proofs (i.e., short paths from the query node.) Formally, personalized PageRank is the fixed point of the iteration pt+1 = αχv0 + (1 −α)Wpt (1) 2The edge annotations will be discussed later. where p[u] is the weight assigned to u, v0 is the seed (i.e., query) node, χv0 is a vector with χv0[v0] = 1 and χv0[u] = 0 for u ̸= v, and W is a matrix of transition probabilities, i.e., W[v, u] is the probability of transitioning from node u to a child node v. The parameter α is the reset probability, and the transition probabilities we use will be discussed below. Like Prolog, ProPPR’s proof graph is also constructed on-the-fly, but rather than using depthfirst search, we use PageRank-Nibble, a fast approximate technique for incrementally exploring a large graph from a an initial “seed” node (Andersen et al., 2008). PageRank-Nibble takes a parameter ϵ and will return an approximation ˆp to the personalized PageRank vector p, such that each node’s estimated probability is within ϵ of correct. 
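A minimal sketch of equation (1), p_{t+1} = alpha * chi_{v0} + (1 - alpha) * W p_t, as exact power iteration on an invented four-node proof graph; ProPPR itself uses the approximate PageRank-Nibble procedure rather than this dense computation, so the sketch is only meant to make the random-walk-with-restart semantics concrete.

```python
import numpy as np

def personalized_pagerank(W, v0, alpha=0.2, iters=100):
    """W[v, u]: probability of moving from node u to node v (columns sum to 1);
    v0 is the query (seed) node; alpha is the reset probability."""
    n = W.shape[0]
    chi = np.zeros(n)
    chi[v0] = 1.0
    p = chi.copy()
    for _ in range(iters):
        p = alpha * chi + (1 - alpha) * W @ p
    return p

if __name__ == "__main__":
    # Invented proof graph: node 0 is the query node; nodes 2 and 3 stand in
    # for solution nodes reachable by proofs of different lengths.
    W = np.array([
        [0.0, 0.0, 0.0, 0.0],
        [0.5, 0.0, 0.0, 0.0],
        [0.5, 0.5, 0.0, 1.0],
        [0.0, 0.5, 1.0, 0.0],
    ])
    print(personalized_pagerank(W, v0=0).round(3))
```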
We close this background section with some final brief comments about ProPPR. Scalability. ProPPR is currently limited in that it uses memory to store the fact databases, and the proof graphs constructed from them. ProPPR uses a special-purpose scheme based on sparse matrix representations to store facts which are triples, which allows it to accomodate databases with hundreds of millions of facts in tens of gigabytes. With respect to run-time, ProPPR’s scalability is improved by the fast approximate inference scheme used, which is often an order of magnitude faster than power iteration for moderatesized problems (Wang et al., 2013). Experimen357 Figure 2: The data generation example as described in subsection 3.2. tation and learning are also sped up because with PageRank-Nibble, each query is answered using a “small”—size O( 1 αϵ)—proof graph. Many operations required in learning and experimentation can thus be easily parallized on a multi-core machine, by simply distributing different proof graphs to different threads. Parameter learning. Personalized PageRank scores are defined by a transition probability matrix W, which is parameterized as follows. ProPPR allows “feature generators” to be attached to its rules, as indicated by the code after the hashtags in the example program.3 Since edges in the proof graph correspond to rule matches, the edges can also be labeled by features, and a weighted combination of these features can be used to define a total weight for each edge, which finally can be normalized used to define the transition matrix W. Learning can be used to tune these weights to data; ProPPR’s learning uses a parallelized SGD method, in which inference on different examples is performed in different threads, and weight up3For instance, when matching the rule “sim(X,Y) :links(X,Y)” to a condition such as “sim(a,X)” the two features “sim” and “link” are generated; likewise when matching the rule “linkedBy(X,Y,W) :- true” to the condition “linkedBy(a,c,sprinter)” the feature “by(sprinter)” is generated. dates are synchronized. Structure learning. Prior work (Wang et al., 2014) has studied the problem of learning a ProPPR theory, rather than simply tuning parameters in an existing theory, a process called structure learning (SL). In particular, Wang et al. (2014) propose a scheme called the structural gradient which scores rules in some (possibly large) userdefined space R of potential rules, which can be viewed as instantiations of rule templates, such as the ones shown in the left-hand side of Table 2. For completeness, we will summarize briefly the approach used in (Wang et al., 2014). The space of potential rules R is defined by a “secondorder abductive theory”, which conceptually is an interpreter that constructs proofs using all rules in R. Each rule template is mapped to two clauses in the interpreter: one simulates the template (for any binding), and one “abduces” the specific binding (facts) from the KB. Associated with the use of the abductive rule is a feature corresponding to a particular binding for the template. The gradient of these features indicates which instantiated rules can be usefully added to the theory. More details can be found in (Wang et al., 2014). 358 Rule template ProPPR clause Structure learning (a) P(X,Y) :- R(X,Y) interp(P,X,Y) :- interp0(R,X,Y),abduce if(P,R). abduce if(P,R) :- true # f if(P,R). (b) P(X,Y) :- R(Y,X) interp(P,X,Y) :- interp0(R,Y,X),abduce ifInv(P,R). abduce ifInv(P,R) :- true # f ifInv(P,R). 
(c) P(X,Y) :- R1(X,Z),R2(Z,Y) interp(P,X,Y) :- interp0(R1,X,Z),interp0(R2,Z,Y), abduce chain(P,R1,R2). abduce chain(P,R1,R2) :- true # f chain(P,R1,R2). base case for SL interpreter interp0(P,X,Y) :- rel(R,X,Y). insertion point for learned rules interp0(P,X,Y) :- any rules learned by SL. Information extraction (d) R(X,Y) :- link(X,Y,W), interp(R,X,Y) :- link(X,Y,W),abduce indicates(W,R). indicates(W,R). abduce indicates(W,R) :- true #f ind1(W,R). (e) R(X,Y) :- link(X,Y,W1), interp(R,X,Y) :- link(X,Y,W1),link(X,Y,W2), link(X,Y,W2), abduce indicates(W1,W2,R). indicates(W1,W2,R). abduce indicates(W1,W2,R) :- true #f ind2(W1,W2,R). Latent context invention (f) R(X,Y) :- latent(L), interp(R,X,Y) :- latent(L),link(X,Y,W),abduce latent(W,L,R). link(X,Y,W), abduce latent(W,L,R) :- true #f latent1(W,L,R). indicates(W,L,R) (g) R(X,Y) :- latent(L1),latent(L2) interp(R,X,Y) :- latent(L1),latent(L2),link(X,Y,W), link(X,Y,W), abduce latent(W,L1,L2,R). indicates(W,L1,L2,R) abduce latent(W,L1,L2,R) :- true #f latent2(W,L1,L2,R). Table 2: The ProPPR template and clauses for joint structure learning and information extraction. 3.2 Joint Model for IE and SRL Dataset Generation The KBs and text used in our experiments were derived from Wikipedia. Briefly, we choose a set of closely-related pages from a hand-selected Wikipedia list. These pages define a set of entities E, and a set of commonlyused Infobox relations R between these entities define a KB. The relation mentions are hyperlinks between the pages, and the features of these relation mentions are words that appear nearby these links. This information is encoded in a single relation link(X,Y,W), which indicates that there is hyperlink between Wikipedia pages X to Y which is near the context word W. The Infobox relation triples are stored in another relation, rel(R,X,Y). 4 Figure 2 shows an example. We first find the “European royal families” to find a list of enti4In more detail, the extraction process was as follows. (1) We used a DBpedia dump of categories and hyperlink structure to find pages in a category; sometimes, this included crawling a supercategory page to find categories and then entities. (2) We used the DBpedia hyperlink graph to find the target entity pages, downloaded the most recent (2014) version of each of these pages, and collected relevant hyperlinks and anchor text, together with 80 characters of context to either side. ties E. This list contains the page “Louis VI of France”, the source entity, which contains an outlink to the target entity page “Philip I of France”. On the source page, we can find the following text: “Louis was born in Paris, the son of Philip I and his first wife, Bertha of Holland.” From Infobox data, we also may know of a relationship between the source and target entities: in this case, the target entity is the parent of the source entity. Theory for Joint IE and SL The structure learning templates we used are identical to those used in prior work (Wang et al., 2014), and are summarized by the clauses (a-c) in Table 2. In the templates in the left-hand side of the table, P, R, R1 and R2 are variables in the template, which will be bound to specific relations found to be useful in prediction. (The interpreter rules on the righthand side are provided for completeness, and can be ignored by readers not deeply familiar with the work of (Wang et al., 2014).) The second block of the table contains the templates used for IE. 
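Stepping back to the dataset-generation step of subsection 3.2 for a moment, the sketch below shows one plausible way the link(X,Y,W) and rel(R,X,Y) fact databases could be assembled from hyperlink anchors with surrounding context text and from Infobox triples. The record format, context handling, and tokenization are assumptions for illustration, not the exact extraction pipeline used for the datasets.

```python
def build_facts(hyperlinks, infobox_triples):
    """hyperlinks: (source_page, target_page, context_text) records, where
    context_text is the anchor plus surrounding words (the paper keeps roughly
    80 characters to either side of each link).
    infobox_triples: (relation, source_page, target_page) records.
    Returns ProPPR-style facts link(X, Y, W) and rel(R, X, Y) as tuples."""
    link_facts = set()
    for src, tgt, context in hyperlinks:
        for word in context.lower().split():
            link_facts.add(("link", src, tgt, word.strip(",.")))
    rel_facts = {("rel", rel, x, y) for rel, x, y in infobox_triples}
    return link_facts, rel_facts

# Toy records based on the running example; not the real extraction output.
links = [("Louis_VI_of_France", "Philip_I_of_France",
          "Louis was born in Paris, the son of Philip I and his first wife")]
infobox = [("parent", "Louis_VI_of_France", "Philip_I_of_France")]
link_facts, rel_facts = build_facts(links, infobox)
print(sorted(link_facts)[:3])
print(rel_facts)
```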
For example, to understand template (d), recall that the predicate link indicates a hyperlink from Wikipedia page X to 359 Y , which includes the context word W between two entities X and Y . The abductive predicate abduce indicates activates a feature template, in which we learn the degree of association of a context word and a relation from the training data. These rules essentially act as a trainable classifier which classifies entity pairs based on the hyperlinks they that contain them, and classifies the hyperlinks according to the relation they reflect, based on context-word features. Notice that the learner will attempt to tune word associations to match the gold rel facts used as training data, and that doing this does not require assigning labels to individual links, as would be done in a traditional distant supervision setting: instead these labels are essentially left latent in this model. Similar to “deep learning” approaches, the latent assignments are provided not by EM, but by hill-climbing search in parameter space. A natural extension to this model is to add a bilexical version of this classifier in clause (e), where we learn a feature which conjoins word W1, word W2, and relation R. Combining the clauses from (a) to (e), we derive a hybrid theory for joint SL and IE: the structure learning section involves a second-order probabilistic logic theory, where it searches the relational KB to form plausible first-order relational inference clauses. The information extraction section from (d) to (e) exploits the distributional similarity of contextual words for each relation, and extracts relation triples from the text, using distant supervision and latent labels for relation mentions (which in our case are hyperlinks). Training this theory as a whole trains it to perform joint reasoning to facts for multiple relations, based on relations that are known (from the partial KB) or inferred from the IE part of the theory. Both parameters for the IE portion of the theory and inference rules between KB relations are learned.5 Latent Context Invention Note that so far both the IE clauses (d-e) are fully observable: there are no latent predicates or variables. Recent work (Riedel et al., 2013) suggests that learning latent representations for words improves performance in predicting relations. Perhaps this is because such latent representations can better model the semantic information in surface forms, which are often ambiguous. 5In in addition to finding rules which instantiate the templates, weights on these rules are also learned. We call our method latent context invention (LCI), and it is inspired from literature in predicate invention (Kok and Domingos, 2007).6 LCI applies the idea of predicate invention to the context space: instead of inventing new predicates, we now invent a latent context property that captures the regularities among the similar relational lexical items. To do this, we introduce some additional rules of the form latent(1) :- true, latent(2) :- true, etc, and allow the learner to find appropriate weights for pairing these arbitrarily-chosen values with specific words. This is implemented by template (f) in Table 2. Adding this to the joint theory means that we will learn to map surfacelevel lexical items (words) to the “invented” latent context values and also to relation. Another view of LCI is that we are learning a latent embedding of words jointly with relations. 
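As a loose illustration of latent context invention, the sketch below enumerates the latent1(W, L, R) features that template (f) would activate for a single context word, and shows how the learned weights over the latent values can be read off as a small embedding for that word. The feature-string encoding, the number of latent values, and the example weights are invented for illustration; they are not ProPPR's internal representation.

```python
def lci_features(word, relation, num_latent=3):
    """Features pairing a context word with each 'invented' latent value and the
    relation, mirroring template (f): f_latent1(W, L, R).  Learning assigns a
    weight to each pairing, i.e. a soft assignment of words to latent contexts."""
    return [f"latent1({word},{latent},{relation})"
            for latent in range(1, num_latent + 1)]

def latent_profile(word, relations, weights, num_latent=3):
    """One view of LCI: the learned weights over latent values give the word a
    dense representation (summed over relations here, purely for illustration)."""
    return [sum(weights.get(f"latent1({word},{latent},{rel})", 0.0) for rel in relations)
            for latent in range(1, num_latent + 1)]

feats = lci_features("succeeded", "successor")
print(feats)                                 # three candidate features
weights = {feats[0]: 0.9, feats[2]: -0.1}    # made-up learned weights
print(latent_profile("succeeded", ["successor"], weights))
```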
In template (f) we model a single latent dimension, but to model higher-dimensional latent variables, we can add the clauses such as (g), which constructs a two-dimensional latent space. Below we will call this variant method hLCI. 4 Datasets Using the data generation process that we described in subsection 3.2, we extract two datasets from the supercategories of “European royal families” and “American people of English descent, and third geographic dataset using three lists: “List of countries by population”, “List of largest cities and second largest cities by country” and “List of national capitals by population”. For the royal dataset, we have 2,258 pages with 67,483 source-context-target mentions, and we use 40,000 for training, and 27,483 for testing. There are 15 relations7. In the American dataset, we have 679 pages with 11,726 mentions, and we use 7,000 for training, and 4,726 for testing. This dataset includes 30 relations8. As for the Geo dataset, there are 497 6To give some background on this nomenclature, we note that the SL method is inspired by Cropper and Muggleton’s Metagol system (Cropper and Muggleton, 2014), which includes predicate invention. In principle predicates could be invented by SL, by extending the interpreter to consider “invented” predicate symbols as binding to its template variables (e.g., P and R); however, in practice invented predicates leads to close dependencies between learned rules, and are highly sensitive to the level of noise in the data. 7birthPlace, child, commander, deathPlace, keyPerson, knownFor, monarch, parent, partner, predecessor, relation, restingPlace, spouse, successor, territory 8architect, associatedBand, associatedMusicalArtist, au360 pages with 43,475 mentions, and we use 30,000 for training, and 13,375 for testing. There are 10 relations9. The datasets are freely available for download at http://www.cs.cmu.edu/ ˜yww/data/jointIE+Reason.zip. 5 Experiments To evaluate these methods, we use the setting of Knowledge Base completion (Socher et al., 2013; Wang et al., 2014; West et al., 2014). We randomly remove a fixed percentage of facts in a training knowledge base, train the learner from the partial KB, and use the learned model to predict facts in the test KB. KB completion is a wellstudied task in SRL, where multiple relations are often needed to fill in missing facts, and thus reconstruct the incomplete KB. Following prior work (Riedel et al., 2013; Wang et al., 2013), we use mean average precision (MAP) as the evaluation metric. 5.1 Baselines To understand the performance of our joint model, we compare with three prior methods. Structure Learning (SL) includes the second-order relation learning templates (a-c) from Table 2. Information Extraction (IE) includes only templates (d) and (e). Markov Logic Networks (MLN) is the Alchemy’s implementation10 of Markov Logic Networks (Richardson and Domingos, 2006), using the first-order clauses learned from SL method11. We used conjugate gradient weight learning (Lowd and Domingos, 2007) with 10 iterations. Finally, Universal Schema is a state-of-the-art matrix factorization based universal method for jointly learning surface patterns and relations. We used the code and parameter settings for the best-performing model (NFE) from (Riedel et al., 2013). 
As a final baseline method, we considered a simpler approach to clustering context words, thor, birthPlace, child, cinematography, deathPlace, director, format, foundationOrganisation, foundationPerson, influenced, instrument, keyPerson, knownFor, location, musicComposer, narrator, parent, president, producer, relation, relative, religion, restingPlace, spouse, starring, successor, writer 9archipelago, capital, country, daylightSavingTimeZone, largestSettlement, leaderTitle, mottoFor, timeZone, twinCity, twinCountry 10http://alchemy.cs.washington.edu/ 11We also experimented with Alchemy’s structure learning, but it was not able to generate results in 24 hours. which we called Text Clustering, which used the following template: R(X,Y) :clusterID(C),link(X,Y,W), cluster(C,W),related(R,W). Here surface patterns are grouped to form latent clusters in a relation-independent fashion. 5.2 The Effectiveness of the Joint Model Our experimental results are shown in 3. The leftmost part of the table concerns the Royal dataset. We see that the universal schema approach outperforms the MLN baseline in most cases, but ProPPR’s SL method substantially improves over MLN’s conjugated gradient learning method, and the universal schema approach. This is perhaps surprising, as the universal schema approach is also a joint method: we note that in our datasets, unlike the New York Times corpus used in (Riedel et al., 2013), large numbers of unlabeled examples are not available. The unigram and bilexical IE models in ProPPR also perform well—better than SL on this data. The joint model outperforms the baselines, as well as the separate models. The difference is most pronounced when the background KB gets noisier: the improvement with 10% missing setting is about 1.5 to 2.3% MAP, while with 50% missing data, the absolute MAP improvement is from 8% to 10%. In the next few columns of Table 3, we show the KB completion results for the Geo dataset. This dataset has fewer relations, and the most common one is country. The overall MAP scores are much higher than the previous dataset. MLN’s results are good, but still generally below the universal schema method. On this dataset, the universal schema method performs better than the IE only model for ProPPR in most settings. However, the ProPPRjoint model still shows large improvements over individual models and the baselines: the absolute MAP improvement is 22.4%. Finally, in the rightmost columns of Table 3, we see that the overall MAP scores for the American dataset are relatively lower than other datasets, perhaps because it is the smallest of the three. The universal schema approach consistently outperforms the MLN model, but not ProPPR. On this dataset the SL-only model in ProPPR outperforms the IE-only models; however, the joint models still outperform individual ProPPR models from 1.5% to 6.4% in MAP. 
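Since all of the comparisons above and in Table 3 are reported in mean average precision, here is a small sketch of the metric as it is typically computed over ranked KB-completion answers; the candidate ranking and gold set below are invented for illustration.

```python
def average_precision(ranked, gold):
    """ranked: candidate answers ordered by model score; gold: set of true answers."""
    hits, ap = 0, 0.0
    for k, answer in enumerate(ranked, start=1):
        if answer in gold:
            hits += 1
            ap += hits / k            # precision at each recall point
    return ap / len(gold) if gold else 0.0

def mean_average_precision(queries):
    """queries: list of (ranked_candidates, gold_set) pairs, one per test query."""
    return sum(average_precision(r, g) for r, g in queries) / len(queries)

# Hypothetical query parent(louis_vi, ?) with an invented ranking.
ranked = ["philip_i", "louis_vii", "bertha"]
gold = {"philip_i", "bertha"}
print(mean_average_precision([(ranked, gold)]))   # (1/1 + 2/3) / 2 = 0.833...
```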
361 Royal Geo American % missing 10% 20% 30% 40% 50% 10% 20% 30% 40% 50% 10% 20% 30% 40% 50% Baselines MLN 60.8 43.7 44.9 38.8 38.8 80.4 79.2 68.1 66.0 68.0 54.0 56.0 51.2 41.0 13.8 Universal Schema 48.2 53.0 52.9 47.3 41.2 82.0 84.0 75.7 77.0 65.2 56.7 51.4 55.9 54.7 51.3 SL 79.5 77.2 74.8 65.5 61.9 83.8 80.4 77.1 72.8 67.2 73.1 70.0 71.3 67.1 61.7 IE only IE (U) 81.3 78.5 76.4 75.7 70.6 83.9 79.4 73.1 71.6 65.2 63.4 61.0 60.2 61.4 54.4 IE (U+B) 81.1 78.1 76.2 75.5 70.3 84.0 79.5 73.3 71.6 65.3 64.3 61.2 61.1 62.1 55.7 Joint SL+IE (U) 82.8 80.9 79.1 77.9 78.6 89.5 89.4 89.3 88.1 87.6 74.0 73.3 73.7 70.5 68.0 SL+IE (U+B) 83.4 82.0 80.7 79.7 80.3 89.6 89.6 89.5 88.4 87.7 74.6 73.5 74.2 70.9 68.4 Joint + Latent Joint + Clustering 83.5 82.3 81.2 80.2 80.7 89.8 89.6 89.5 88.8 88.4 74.6 73.9 74.4 71.5 69.7 Joint + LCI 83.5 82.5 81.5 80.6 81.1 89.9 89.8 89.7 89.1 89.0 74.6 74.1 74.5 72.3 70.3 Joint + LCI + hLCI 83.5 82.5 81.7 81.0 81.3 89.9 89.7 89.7 89.6 89.5 74.6 74.4 74.6 73.6 72.1 Table 3: The MAP results for KB completion on three datasets. U: unigram. B: bigram. Best result in each column is highlighted in bold. The averaged training runtimes on an ordinary PC for unigram joint model on the above Royal, Geo, American datasets are 38, 36, and 29 seconds respectively, while the average testing times are 11, 10, and 9 seconds. For bilexical joint models, the averaged training times are 25, 10, and 10 minutes respectively, whereas the testing times are 111, 28, and 26 seconds respectively. 5.3 The Effectiveness of LCI Finally we consider the latent context invention (LCI) approach. The last three rows of Table 3 show the performances of LCI and hHCI. We compare it here with the best previous approach, the joint IE + SL model, and text clustering approach. For the Royal dataset, first, the LCI and hLCI models clearly improve over joint IE and SL. In noisy conditions of missing 50% facts, the biggest improvement of LCI/hLCI is 2.4% absolute MAP. From the Geo dataset, we see that the joint models and joint+latent models have similar performances in relatively clean conditions (10%-30%) facts missing. However, in noisy conditions, we the LCI and hLCI model has an advantage of between 1.5% to 1.8% in absolute MAP. Finally, the results for the American dataset show a consistent trend: again, in noisy conditions (missing 40% to 50% facts), the latent context models outperform the joint IE + SL models by 2.9% and 3.7% absolute MAP scores. Although the LCI approach is inspired by predicate invention in inductive logic programming, our result is also consistent with theories of generalized latent variable modeling in probabilistic graphical models and statistics (Skrondal and Rabe-Hesketh, 2004): modeling hidden variables helps take into account the measurement (observation) errors (Fornell and Larcker, 1981) and results in a more robust model. 6 Discussions Compared to state-of-the-art joint models (Riedel et al., 2013) that learn the latent factor representations, our method gives strong improvements in performance on three datasets with various settings. Our model is also trained to retrieve a target entity from a relation name plus a source entity, and does not require large samples of unlabeled or negative examples in training. Another advantage of the ProPPR model is that they are explainable. 
For example, below are the features with the highest weights after joint learning from the Royal dataset, written as predicates or rules: indicates(“mother”,parent) indicates(“king”,parent) indicates(“spouse”,spouse) indicates(“married”,spouse) indicates(“succeeded”,successor) indicates(“son”,successor) parent(X,Y) :- successor(Y,X) successor(X,Y) :- parent(Y,X) spouse(X,Y) :- spouse(Y,X) parent(X,Y) :- predecessor(X,Y) successor(Y,X) :- spouse(X,Y) predecessor(X,Y) :- parent(X,Y) Here we see that our model is able to learn that the keywords “mother” and “king” that are indicators 362 of the relation parent, that the keywords “spouse” and “married” indicate the relation spouse, and the keywords “succeeded” and “son” indicate the relation successor. Interestingly, our joint model is also able to learn the inverse relation successor for the relation parent, as well as the similar relational predicate predecessor for parent. 7 Conclusions In this paper, we address the issue of joint information extraction and relational inference. To be more specific, we introduce a holistic probabilistic logic programming approach for fusing IE contexts with relational KBs, using locally groundable inference on a joint proof graph. We then propose a latent context invention technique that learns relation-specific latent clusterings for words. In experiments, we show that joint modeling for IE and SRL improves over prior state-of-the-art baselines by large margins, and that the LCI model outperforms various fully baselines on three realworld Wikipedia dataset from different domains. In the future, we are interested in extending these techniques to also exploit unlabeled data. Acknowledgment This work was sponsored in part by DARPA grant FA87501220342 to CMU and a Google Research Award. References Reid Andersen, Fan R. K. Chung, and Kevin J. Lang. 2008. Local partitioning for directed graphs using pagerank. Internet Mathematics, 5(1):3–22. Gabor Angeli, Julie Tibshirani, Jean Y Wu, and Christopher D Manning. 2014. Combining distant and partial supervision for relation extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Bogdan Babych and Anthony Hartley. 2003. Improving machine translation quality with automatic named entity recognition. In Proceedings of the 7th International EAMT workshop on MT and other Language Technology Tools, Improving MT through other Language Technology Tools: Resources and Tools for Building MT, pages 1–8. Association for Computational Linguistics. Michele Banko, Michael J Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction for the web. In IJCAI, volume 7, pages 2670–2676. Mark Craven, Johan Kumlien, et al. 1999. Constructing biological knowledge bases by extracting information from text sources. In ISMB, volume 1999, pages 77–86. Andrew Cropper and Stephen H Muggleton. 2014. Can predicate invention in meta-interpretive learning compensate for incomplete background knowledge? Proceedings of the 24th International Conference on Inductive Logic Programming. Kroly Csalogny, Dniel Fogaras, Balzs Rcz, and Tams Sarls. 2005. Towards scaling fully personalized PageRank: Algorithms, lower bounds, and experiments. Internet Mathematics, 2(3):333–358. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. 
In 52nd Annual Meeting of the Association for Computational Linguistics, Baltimore, MD, USA, June. Jenny Rose Finkel and Christopher D Manning. 2009. Joint parsing and named entity recognition. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 326–334. Association for Computational Linguistics. Claes Fornell and David F Larcker. 1981. Evaluating structural equation models with unobservable variables and measurement error. Journal of marketing research, pages 39–50. Lise Getoor and Ben Taskar. 2007. Introduction to statistical relational learning. MIT press. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledgebased weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language TechnologiesVolume 1, pages 541–550. Association for Computational Linguistics. Stanley Kok and Pedro Domingos. 2007. Statistical predicate invention. In Proceedings of the 24th international conference on Machine learning, pages 433–440. ACM. Ni Lao, Tom M. Mitchell, and William W. Cohen. 2011. Random walk inference and learning in a large scale knowledge base. In EMNLP, pages 529– 539. ACL. Ni Lao, Amarnag Subramanya, Fernando C. N. Pereira, and William W. Cohen. 2012. Reading the web with learned syntactic-semantic inference rules. In EMNLP-CoNLL, pages 1017–1026. ACL. Jiwei Li, Alan Ritter, and Eduard Hovy. 2014. Weakly supervised user profile extraction from twitter. ACL. 363 Daniel Lowd and Pedro Domingos. 2007. Efficient weight learning for markov logic networks. In Knowledge Discovery in Databases: PKDD 2007, pages 200–211. Springer. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Diego Moll´a, Menno Van Zaanen, and Daniel Smith. 2006. Named entity recognition for question answering. Proceedings of ALTW, pages 51–58. Larry Page, Sergey Brin, R. Motwani, and T. Winograd. 1998. The PageRank citation ranking: Bringing order to the web. In Technical Report, Computer Science department, Stanford University. Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Mach. Learn., 62(12):107–136. Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In Machine Learning and Knowledge Discovery in Databases, pages 148–163. Springer. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of NAACL-HLT, pages 74–84. Anders Skrondal and Sophia Rabe-Hesketh. 2004. Generalized latent variable modeling: Multilevel, longitudinal, and structural equation models. CRC Press. Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In Advances in Neural Information Processing Systems, pages 926–934. Mihai Surdeanu, Julie Tibshirani, Ramesh Nallapati, and Christopher D Manning. 2012. Multi-instance multi-label learning for relation extraction. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 455– 465. Association for Computational Linguistics. Charles Sutton and Andrew McCallum. 2006. An introduction to conditional random fields for relational learning. Introduction to statistical relational learning, pages 93–128. William Yang Wang and Zhenhao Hua. 2014. A semiparametric gaussian copula regression model for predicting financial risks from earnings calls. In Proceedings of the 52th Annual Meeting of the Association for Computational Linguistics (ACL 2014), Baltimore, MD, USA, June. ACL. William Yang Wang, Kathryn Mazaitis, and William W Cohen. 2013. Programming with personalized pagerank: a locally groundable first-order probabilistic logic. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 2129–2138. ACM. William Yang Wang, Kathryn Mazaitis, and William W Cohen. 2014. Structure learning via parameter learning. Proceedings of the 23rd ACM International Conference on Information and Knowledge Management (CIKM 2014). William Yang Wang, Kathryn Mazaitis, Ni Lao, Tom Mitchell, and William W Cohen. 2015. Efficient inference and learning in a large knowledge base: Reasoning with extracted information using a locally groundable first-order probabilistic logic. Machine Learning Journal. Robert West, Evgeniy Gabrilovich, Kevin Murphy, Shaohua Sun, Rahul Gupta, and Dekang Lin. 2014. Knowledge base completion via search-based question answering. In Proceedings of the 23rd international conference on World wide web, pages 515– 526. International World Wide Web Conferences Steering Committee. Jason Weston, Antoine Bordes, Oksana Yakhnenko, Nicolas Usunier, et al. 2013. Connecting language and knowledge bases with embedding models for relation extraction. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1366–1371. Benjamin Roth Tassilo Barth Michael Wiegand and Mittul Singh Dietrich Klakow. 2013. Effective slot filling based on shallow distant supervision methods. Proceedings of NIST KBP workshop. Fei Wu and Daniel S Weld. 2007. Autonomously semantifying wikipedia. In Proceedings of the sixteenth ACM conference on Conference on information and knowledge management, pages 41–50. ACM. Fei Wu and Daniel S Weld. 2010. Open information extraction using wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 118–127. Association for Computational Linguistics. 364
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 365–375, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics A Knowledge-Intensive Model for Prepositional Phrase Attachment Ndapandula Nakashole Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA, 15213 [email protected] Tom M. Mitchell Carnegie Mellon University 5000 Forbes Avenue Pittsburgh, PA, 15213 [email protected] Abstract Prepositional phrases (PPs) express crucial information that knowledge base construction methods need to extract. However, PPs are a major source of syntactic ambiguity and still pose problems in parsing. We present a method for resolving ambiguities arising from PPs, making extensive use of semantic knowledge from various resources. As training data, we use both labeled and unlabeled data, utilizing an expectation maximization algorithm for parameter estimation. Experiments show that our method yields improvements over existing methods including a state of the art dependency parser. 1 Introduction Machine reading and information extraction (IE) projects have produced large resources with many millions of facts (Suchanek et al., 2007; Mitchell et al., 2015). This wealth of knowledge creates a positive feedback loop for automatic knowledge base construction efforts: the accumulated knowledge can be leveraged to improve machine reading; in turn, improved reading methods can be used to better extract knowledge expressed using complex and potentially ambiguous language. For example, prepositional phrases (PPs) express crucial information that IE methods need to extract. However, PPs are a major source of syntactic ambiguity. In this paper, we propose to use semantic knowledge to improve PP attachment disambiguation. PPs such as “in”, “at”, and “for” express details about the where, when, and why of relations and events. PPs also state attributes of nouns. As an example, consider the following sentences: S1.) Alice caught the butterfly with the spots. S2.) Alice caught the butterfly with the net. S NP VP VP NP Alice caught butterfly PP with spots S1.) Noun attachment S NP VP VP NP PP Alice caught butterfly with net S2.) Verb attachment Figure 1: Parse trees where the prepositional phrase (PP) attaches to the noun, and to the verb. Relations Noun-Noun binary relations (Paris, located in, France) (net, caught, butterfly) Nouns Noun semantic categories (butterfly, isA, animal) Verbs Verb roles caught(agent, patient, instrument) Prepositions Preposition definitions f(for)= used for, has purpose, ... f(with)= has, contains, ... Discourse Context n0 ∈{n0, v, n1, p, n2} Table 1: Types of background knowledge used in this paper to determine PP attachment. S1 and S2 are syntactically different, this is evident from their corresponding parse trees in Figure 1. Specifically, S1 and S2 differ in where their PPs attach. In S1, the butterfly has spots and therefore the PP, “with the spots”, attaches to the noun. For relation extraction, we obtain a binary relation of the form: ⟨Alice⟩caught ⟨butterfly with spots⟩. However, in S2, the net is the instrument used for catching and therefore the PP, “with the net”, attaches to the verb. For relation extraction, we get a ternary extraction of the form: ⟨Alice⟩caught ⟨butterfly⟩with ⟨net⟩. 
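To make the extraction consequence of the attachment decision concrete, the short sketch below emits either a ternary or a binary tuple from a 5-tuple that includes the preceding subject noun; the function name and tuple encoding are illustrative only, not part of the system described later in the paper.

```python
def extract(n0, v, n1, p, n2, attaches_to_verb):
    """Turn a PP tuple into a relation, depending on where the PP attaches."""
    if attaches_to_verb:
        # Verb attachment: the PP modifies the event, giving a ternary relation.
        return (n0, v, n1, p, n2)           # <Alice> caught <butterfly> with <net>
    # Noun attachment: the PP modifies n1, giving a binary relation
    # whose second argument is the complex noun phrase.
    return (n0, v, f"{n1} {p} {n2}")        # <Alice> caught <butterfly with spots>

print(extract("Alice", "caught", "butterfly", "with", "net", attaches_to_verb=True))
print(extract("Alice", "caught", "butterfly", "with", "spots", attaches_to_verb=False))
```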
The PP attachment problem is often defined as follows: given a PP occurring within a sentence where there are multiple possible attachment sites 365 0.4 0.5 0.6 0.7 0.8 0.9 WITH AT FROM FOR AS IN ON Figure 2: Dependency parser PP attachment accuracy for various frequent prepositions. for the PP, choose the most plausible attachment site. In the literature, prior work going as far back as (Brill and Resnik, 1994; Ratnaparkhi et al., 1994; Collins and Brooks, 1995) has focused on the language pattern that causes most PP ambiguities, which is the 4-word sequence: {v, n1, p, n2} (e.g., {caught, butterfly, with, spots}). The task is to determine if the prepositional phrase (p, n2) attaches to the verb v or to the first noun n1. Following common practice, we focus on PPs occurring as {v, n1, p, n2} quadruples — we shall refer to these as PP quads. The approach we present here differs from prior work in two main ways. First, we make extensive use of semantic knowledge about nouns, verbs, prepositions, pairs of nouns, and the discourse context in which a PP quad occurs. Table 1 summarizes the types of knowledge we considered in our work. Second, in training our model, we rely on both labeled and unlabeled data, employing an expectation maximization (EM) algorithm (Dempster et al., 1977). Contributions. In summary, our main contributions are: 1) Semantic Knowledge: Previous methods largely rely on corpus statistics. Our approach draws upon diverse sources of background knowledge, leading to performance improvements. 2) Unlabeled Data: In addition to training on labeled data, we also make use of a large amount of unlabeled data. This enhances our method’s ability to generalize to diverse data sets. 3) Datasets: In addition to the standard Wall Street Journal corpus (WSJ) (Ratnaparkhi et al., 1994), we labeled two new datasets for testing purposes, one from Wikipedia (WKP), and another from the New York Times Corpus (NYTC). We make these datasets freely available for fu0 0.25 0.5 0.75 1 IN FROM WITH FOR OF As AT ON Verb attachments Noun attachments Figure 3: Noun vs. verb attachment proportions for frequent prepositions in the labeled NYTC dataset. ture research. In addition, we have applied our model to over 4 million 5-tuples of the form {n0, v, n1, p, n2}, and we also make this dataset available1 for research into ternary relation extraction beyond spatial and temporal scoping. 2 State of the Art To quantitatively assess existing tools, we analyzed performance of the widely used Stanford parser2 as of 2014, and the established baseline algorithm (Collins and Brooks, 1995), which has stood the test of time. We first manually labeled PP quads from the NYTC dataset, then prepended the noun phrase appearing before the quad, effectively creating sentences made up of 5 lexical items (n0 v n1 p n2). We then applied the Stanford parser, obtaining the results summarized in Figure 2. The parser performs well on some prepositions, for example, “of”, which tends to occur with noun attaching PPs as can be seen in Figure 3. However, for prepositions with an even distribution over verb and noun attachments, such as “on”, precision is as low as 50%. The Collins baseline achieves 84% accuracy on the benchmark Wall Street Journal PP dataset. However, drawing a distinction in the precision of different prepositions provides useful insights on its performance. 
We re-implemented this baseline and found that when we remove the trivial preposition, “of”, whose PPs are by default attached to the noun by this baseline, precision drops to 78%. This analysis suggests there is substantial room for improvement. 1http://rtw.ml.cmu.edu/resources/ppa 2http://nlp.stanford.edu:8080/parser/ 366 3 Related Work Statistics-based Methods. Prominent prior methods learn to perform PP attachment based on corpus co-occurrence statistics, gathered either from manually annotated training data (Collins and Brooks, 1995; Brill and Resnik, 1994) or from automatically acquired training data that may be noisy (Ratnaparkhi, 1998; Pantel and Lin, 2000). These models collect statistics on how often a given quadruple, {v, n1, p, n2}, occurs in the training data as a verb attachment as opposed to a noun attachment. The issue with this approach is sparsity, that is, many quadruples occuring in the test data might not have been seen in the training data. Smoothing techniques are often employed to overcome sparsity. For example, (Collins and Brooks, 1995) proposed a back-off model that uses subsets of the words in the quadruple, by also keeping frequency counts of triples, pairs and single words. Another approach to overcoming sparsity has been to use WordNet (Fellbaum, 1998) classes, by replacing nouns with their WordNet classes (Stetina and Nagao, 1997; Toutanova et al., 2004) to obtain less sparse corpus statistics. Corpus-derived clusters of similar nouns and verbs have also been used (Pantel and Lin, 2000). Hindle and Rooth proposed a lexical association approach based on how words are associated with each other (Hindle and Rooth, 1993). Lexical preference is used by computing co-occurrence frequencies (lexical associations) of verbs and nouns, with prepositions. In this manner, they would discover that, for example, the verb “send” is highly associated with the preposition from, indicating that in this case, the PP is likely to be a verb attachment. Structure-based Methods. These methods are based on high-level observations that are then generalized into heuristics for PP attachment decisions. (Kimball, 1988) proposed a right association method, whose premise is that a word tends to attach to another word immediately to its right. (Frazier, 1978) introduced a minimal attachment method, which posits that words attach to an existing non-terminal word using the fewest additional syntactic nodes. While simple, in practice these methods have been found to perform poorly (Whittemore et al., 1990). Rule-based Methods. (Brill and Resnik, 1994) proposed methods that learn a set of transformation rules from a corpus. The rules can be too specific to have broad applicability, resulting in low recall. To address low recall, knowledge about nouns, as found in WordNet, is used to replace certain words in rules with their WordNet classes. Parser Correction Methods. The quadruples formulation of the PP problem can be seen as a simplified setting. This is because, with quadruples, there is no need to deal with complex sentences but only well-defined quadruples of the form {v, n1, p, n2}. Thus in the quadruples setting, there are only two possible attachment sites for the PP, the v and n1. An alternative setting is to work in the context of full sentences. In this setting the problem is cast as a dependency parser correction problem (Atterer and Sch¨utze, 2007; Agirre et al., 2008; Anguiano and Candito, 2011). 
That is, given a dependency parse of a sentence, with potentially incorrect PP attachments, rectify it such that the prepositional phrases attach to the correct sites. Unlike our approach, these methods do not take semantic knowledge into account. Sense Disambiguation. In addition to prior work on prepositional phrase attachment, a highly related problem is preposition sense disambiguation (Hovy et al., 2011; Srikumar and Roth, 2013). Even a syntactically correctly attached PP can still be semantically ambiguous with respect to questions of machine reading such as where, when, and why. Therefore, when extracting information from prepositions, the problem of preposition sense disambiguation (semantics) has to be addressed in addition to prepositional phrase attachment disambiguation (syntax). In this paper, our focus is on the latter. 4 Methodology Our approach consists of first generating features from background knowledge and then training a model to learn with these features. The types of features considered in our experiments are summarized in Table 2. The choice of features was motivated by our empirically driven characterization of the problem as follows: (Verb attach) −→v ⟨has-slot-filler⟩n2 (Noun attach a.) −→n1 ⟨described-by⟩n2 (Noun attach b.) −→n2 ⟨described-by⟩n1 367 Feature Type # Feature Example Noun-Noun Binary Relations Source: SVOs F1. svo(n2, v, n1) For q1; (net, caught, butterfly) F2. ∀i : ∃svio; svo(n1, vi, n2) For q2; (butterfly, has, spots) For q2; (butterfly, can see, spots) Noun Semantic Categories Source: T F3. ∀ti ∈T ; isA(n1, ti) For q1 isA(butterlfy, animal) F4. ∀ti ∈T ; isA(n2, ti) For q2 isA(net, device) Verb Role Fillers Source: VerbNet F5. hasRole(n2, ri) For q1; (net, instrument) Preposition Relational Source: M Definitions F6. def(prep, vi) ∀i : ∃svio; vi ∈M ∧ svo(n1, vi, n2) For q2; def(with, has) Discourse Features Source: Sentence(s), T F7. ∀ti ∈T ; isA(n0, ti) n0 ∈{n0, v, n1, p, n2} Lexical Features Source: PP quads For q1; F8. (v, n1, p, n2) (caught, butterfly, with, net) F9. (v, n1, p) (caught, butterfly, with) F10. (v, p, n2) (caught, with, net) F11. (n1, p, n2) (butterfly, with, net) F12. (v, p) (caught, with) F13. (n1, p) (butterfly, with) F14. (p, n2) (with, net) F15. (p) (with) Table 2: Types of features considered in our experiments. All features have values of 1 or 0. The PP quads used as running examples are: q1 = {caught, butterfly, with, net} : V , q2 = {caught, butterfly, with, spots} : N. That is, we found that for verb-attaching PPs, n2 is usually a role filler for the verb, e.g., the net fills the role of an instrument for the verb catch. On the other hand, for noun-attaching PPs, one noun describes or elaborates on the other. In particular, we found two kinds of noun attachments. For the first kind of noun attachment, the second noun n2 describes the first noun n1, for example n2 might be an attribute or property of n1, as in the spots(n2) are an attribute of the butterfly (n1). And for the second kind of noun attachment, the first noun n1 describes the second noun n2, as in the PP quad {expect, decline, in, rates}, where the PP “in rates”, attaches to the noun. The decline:n1 that is expected:v is in the rates:n2. We sampled 50 PP quads from the WSJ dataset and found that every labeling could be explained using our characterization. We make this labeling available with the rest of the datasets. We next describe in more detail how each type of feature is derived from the background knowledge in Table 1. 
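This characterization can be read as a set of boolean checks against the background knowledge of Table 1. The sketch below is a hypothetical rule-of-thumb version of that reading: the fills_role and describes lookups stand in for the VerbNet role and noun-noun relation knowledge that the actual features (Table 2) draw on, and it is not the trained model.

```python
def characterize(v, n1, n2, fills_role, describes):
    """Rule-of-thumb reading of the characterization above:
    verb attach  if n2 fills a semantic role of v (e.g., net = instrument of catch);
    noun attach  if either noun describes/elaborates the other
                 (e.g., spots are an attribute of the butterfly)."""
    if fills_role(v, n2):
        return "verb"
    if describes(n1, n2) or describes(n2, n1):
        return "noun"
    return "unknown"                    # left to the statistical model

# Toy knowledge lookups; the entries are illustrative, not from VerbNet or ClueWeb.
roles = {("caught", "net")}             # n2 can fill a role (instrument) of v
attrs = {("butterfly", "spots")}        # n2 is an attribute of n1

check_role = lambda verb, noun: (verb, noun) in roles
check_desc = lambda a, b: (a, b) in attrs
print(characterize("caught", "butterfly", "net", check_role, check_desc))     # verb
print(characterize("caught", "butterfly", "spots", check_role, check_desc))   # noun
```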
4.1 Feature Generation We generate boolean-valued features for all the feature types we describe in this section. 4.1.1 Noun-Noun Binary Relations The noun-noun binary relation features, F1-2 in Table 2, are boolean features svo(n1, vi, n2) (where vi is any verb) and svo(n2, v, n1) (where v is the verb in the PP quad, and the roles of n2 and n1 are reversed). These features describe diverse semantic relations between pairs of nouns (e.g., butterfly-has-spots, clapton-playedguitar). To obtain this type of knowledge, we dependency parsed all sentences in the 500 million English web pages of the ClueWeb09 corpus, then extracted subject-verb-object (SVO) triples from these parses, along with the frequency of 368 each SVO triple in the corpus. The value of any given feature svo(n1, vi, n2) is defined to be 1 if that SVO triple was found at least 3 times in these SVO triples, and 0 otherwise. To see why these relations are relevant, let us suppose that we have the knowledge that butterfly-hasspots, svo(n1, vi, n2). From this, we can infer that the PP in {caught, butterfly, with, spots} is likely to attach to the noun. Similarly, suppose we know that net-caught-butterfly, svo(n2, v, n1). The fact that a net can be used to catch a butterfly can be used to predict that the PP in {caught, butterfly, with, net} is likely to attach to the verb. 4.1.2 Noun Semantic Categories Noun semantic type features, F3-4, are boolean features isA(n1, ti) and isA(n2, ti) where ti is a noun category in a noun categorization scheme T such as WordNet classes. Knowledge about semantic types of nouns, for example that a butterfly is an animal, enables extrapolating predictions to other PP quads that contain nouns of the same type. We ran experiments with several noun categorizations including WordNet classes, knowledge base ontological types, and an unsupervised noun categorization produced by clustering nouns based on the verbs and adjectives with which they co-occur (distributional similarity). 4.1.3 Verb Role Fillers The verb role feature, F5, is a boolean feature hasRole(n2, ri) where ri is a role that n2 can fulfill for the verb v in the PP quad, according to background knowledge. Notice that if n2 fills a role for the verb, then the PP is a verb attachment. Consider the quad {caught, butterfly, with, net}, if we know that a net can play the role of an instrument for the verb catch, this suggests a likely verb attachment. We obtained background knowledge of verbs and their possible roles from the VerbNet lexical resource (Kipper et al., 2008). From VerbNet we obtained 2, 573 labeled sentences containing PP quads (verbs in the same VerbNet group are considered synonymous), and the labeled semantic roles filled by the second noun n2 in the PP quad. We use these example sentences to label similar PP quads, where similarity of PP quads is defined by verbs from the same VerbNet group. 4.1.4 Preposition Definitions The preposition definition feature, F6, is a boolean feature def(prep, vi) = 1 if ∃vi ∈ M ∧svo(n1, vi, n2) = 1, where M is a definition mapping of prepositions to verb phrases. This mapping defines prepositions, using verbs in our ClueWeb09 derived SVO corpus, in order to capture their senses using verbs; it contains definitions such as def(with, *) = contains, accompanied by, ... . If “with” is used in the sense of “contains” , then the PP is a likely noun attachment, as in n1 contains n2 in the quad ate, cookies, with, cranberries. 
However, if “with” is used in the sense of “accompanied by”, then the PP is a likely verb attachment, as in the quad visited, Paris, with, Sue. To obtain the mapping, we took the labeled PP quads (WSJ; Ratnaparkhi et al., 1994) and computed a ranked list of verbs from SVOs that appear frequently between pairs of nouns for a given preposition. Other sample mappings are: def(for,*) = used for, def(in,*) = located in. Notice that this feature F6 is a selective, more targeted version of F2.
4.1.5 Discourse and Lexical Features The discourse feature, F7, is a boolean feature isA(n0, ti), for each noun category ti found in a noun category ontology T such as WordNet semantic types. The context of the PP quad can contain relevant information for attachment decisions. We take into account the noun preceding a PP quad, in particular its semantic type. This in effect turns the PP quad into a PP 5-tuple, {n0, v, n1, p, n2}, where n0 provides additional context. Finally, we use lexical features in the form of PP quads, features F8-15. To overcome sparsity of occurrences of PP quads, we also use counts of shorter sub-sequences, including triples, pairs, and singles. We only use sub-sequences that contain the preposition, as the preposition has been found to be highly crucial in PP attachment decisions (Collins and Brooks, 1995).
4.2 Disambiguation Algorithm We use the described features to train a model for making PP attachment decisions. Our goal is to compute P(y|x), the probability that the PP (p, n2) in the tuple {v, n1, p, n2} attaches to the verb (v), y = 1, or to the noun (n1), y = 0, given a feature vector x describing that tuple. As input to training the model, we are given a collection of PP quads D, where each di ∈ D is di = {v, n1, p, n2}. A small subset Dl ⊂ D is labeled data; thus for each di ∈ Dl we know the corresponding yi. The rest of the quads, Du, are unlabeled, hence their corresponding yi are unknown. From each PP quad di, we extract a feature vector xi according to the feature generation process discussed in Section 4.1.
4.2.1 Model To model P(y|x), there are various possibilities. One could use a generative model (e.g., Naive Bayes) or a discriminative model (e.g., logistic regression). In our experiments we used both kinds of models, but found that the discriminative model performed better. Therefore, we present details only for our discriminative model. We use the logistic function
P(y = 1 | x, \theta) = \frac{e^{\theta^{T} x}}{1 + e^{\theta^{T} x}},
where \theta is a vector of model parameters. To estimate these parameters, we could use the labeled data as training data and use standard gradient descent to minimize the logistic regression cost function. However, we also leverage the unlabeled data.
4.2.2 Parameter Estimation To estimate model parameters based on both labeled and unlabeled data, we use an Expectation Maximization (EM) algorithm. EM estimates model parameters that maximize the expected log likelihood of the full (observed and unobserved) data. Since we are using a discriminative model, our likelihood function is a conditional likelihood function:
L(\theta) = \sum_{i=1}^{N} \ln P(y_i | x_i) = \sum_{i=1}^{N} \left[ y_i \theta^{T} x_i - \ln\left(1 + \exp(\theta^{T} x_i)\right) \right] \qquad (1)
where i indexes over the N training examples. The EM algorithm produces parameter estimates that correspond to a local maximum in the expected log likelihood of the data under the posterior distribution of the labels, given by \arg\max_{\theta} E_{p(y|x,\theta)}[\ln P(y|x, \theta)]. In the E-step, we use the current parameters \theta^{t-1} to compute the posterior distribution over the y labels, given by P(y|x, \theta^{t-1}).
We then use this posterior distribution to find the expectation of the log of the complete-data conditional likelihood, this expectation is given by Q(θ, θt−1), defined as: Q(θ, θt−1) = N X i=1 Eθt−1[ln P(y|x, θ)] (2) In the M-step, a new estimate θt is then produced, by maximizing this Q function with respect to θ: θt = arg max θ Q(θ, θt−1) (3) EM iteratively computes parameters θ0, θ1, ...θt, using the above update rule at each iteration t, halting when there is no further improvement in the value of the Q function. Our algorithm is summarized in Algorithm 1. The M-step solution for θt is obtained using gradient ascent to maximize the Q function. Algorithm 1 The EM algorithm for PP attachment Input: X, D = Dl ∪Du Output: θT for t = 1 . . . T do E-Step: Compute p(y|xi, θt−1) xi : di ∈Du; p(y|xi, ⃗θ) = e⃗θx 1+e⃗θx xi : di ∈Dl; p(y|xi) = 1 if y = yi, else 0 M-Step: Compute new parameters, θt θt = arg max θ Q(θ, θt−1) Q(θ, θt−1) = N X i=1 X y∈{0,1} p(y|xi, θt−1)× (yθT xi −ln(1 + exp(θT xi))) if convergence(L(θ), L(θt−1)) then break end if end for return θT 5 Experimental Evaluation We evaluated our method on several datasets containing PP quads of the form {v, n1, p, n2}. The task is to predict if the PP (p, n2) attaches to the verb v or to the first noun n1. 5.1 Experimental Setup Datasets. Table 3 shows the datasets used in our experiments. As labeled training data, we used the 370 DataSet # Training quads # Test quads Labeled data WSJ 20,801 3,097 NYTC 0 293 WKP 0 381 Unlabeled data WKP 100,000 4,473,072 Table 3: Training and test datasets used in our experiments. PPAD PPADCollStanNB ins ford WKP 0.793 0.740 0.727 0.701 WKP 0.759 0.698 0.683 0.652 \of NYTC 0.843 0.792 0.809 0.679 NYTC 0.815 0.754 0.774 0.621 \of WSJ 0.843 0.816 0.841 N\A WSJ 0.779 0.741 0.778 N\A \of Table 4: PPAD vs. baselines. Wall Street Journal (WSJ) dataset. For the unlabeled training data, we extracted PP quads from Wikipedia (WKP) and randomly selected 100, 000 which we found to be a sufficient amount of unlabeled data. The largest labeled test dataset is WSJ but it is also made up of a large fraction, of “of” PP quads, 30% , which trivially attach to the noun, as already seen in Figure 3. The New York Times (NYTC) and Wikipedia (WKP) datasets are smaller but contain fewer proportions of “of” PP quads, 15%, and 14%, respectively. Additionally, we applied our model to over 4 million unlabeled 5-tuples from Wikipedia. We make this data available for download, along with our manually labeled NYTC and WKP datasets. For the WKP & NYTC corpora, each quad has a preceding noun, n0, as context, resulting in PP 5-tuples of the form: {n0, v, n1, p, n2}. The WSJ dataset was only available to us in the form of PP quads with no other sentence information. Methods Under Comparison. 1) PPAD (Prepositional Phrase Attachment Disambiguator) is our proposed method. It uses diverse types of semantic knowledge, a mixture of labeled and unlabeled data for training data, a logistic regression classi0.5 0.58 0.66 0.74 0.82 0.9 WKP WKP\of NYTC NYTC\of WSJ WSJ\of PPAD - WordNet Types PPAD - KB Types PPAD - Unsupervised Types PPAD - WordNet Verbs PPAD - Naive Bayes Collins Baseline Stanford Parser Figure 4: PPAD variations vs. baselines. fier, and expectation maximization (EM) for parameter estimation 2) Collins is the established baseline among PP attachment algorithms (Collins and Brooks, 1995). 3) Stanford Parser is a stateof-the-art dependency parser, the 2014 online version. 
4) PPAD Naive Bayes(NB) is the same as PPAD but uses a generative model, as opposed to the discriminative model used in PPAD. 5.2 PPAD vs. Baselines Comparison results of our method to the three baselines are shown in Table 4. For each dataset, we also show results when the “of” quads are removed, shown as “WKP\of”, “NYTC\of”, and “WSJ\of”. Our method yields improvements over the baselines. Improvements are especially significant on the datasets for which no labeled data was available (NYTC and WKP). On WKP, our method is 7% and 9% ahead of the Collins baseline and the Stanford parser, respectively. On NYTC, our method is 4% and 6% ahead of the Collins baseline and the Stanford parser, respectively. On WSJ, which is the source of the labeled data, our method is not significantly better than the Collins baseline. We could not evaluate the Stanford parser on the WSJ dataset. The parser requires well-formed sentences which we could not generate from the WSJ dataset as it was only available to us in the form of PP quads with no other sentence information. For the same reason, we could not generate discourse features,F7, for the WSJ PP quads. For the NYTC and WKP datasets, we generated well-formed short sentences containing only the PP quad and the noun preceding it. 371 Feature Type Precision Recall F1 Noun-Noun Binary Relations (F1-2) low high low Noun Semantic Categories (F3-4) high high high Verb Role Fillers (F5) high low low Preposition Definitions (F6) low low low Discourse Features (F7) high low high Lexical Features (F8-15) high high high Table 5: An approximate characterization of feature knowledge sources in terms of precision/recall/F1 5.3 Feature Analysis We found that features F2 and F6 did not improve performance, therefore we excluded them from the final model, PPAD. This means that binary noun-noun relations were not useful when used permissively, feature F2, but when used selectively, feature F1, we found them to be useful. Our attempt at mapping prepositions to verb definitions produced some noisy mappings, resulting in feature F6 producing mixed results. To analyze the impact of the unlabeled data, we inspected the features and their weights as produced by the PPAD model. From the unlabeled data, new lexical features were discovered that were not in the original labeled data. Some sample new features with high weights for verb attachments are: (perform,song,for,*), (lose,*,by,*), (buy,property,in,*). And for noun attachments: (*,conference,on,*), (obtain,degree,in,*), (abolish,taxes,on,*). We evaluated several variations of PPAD, the results are shown in Figure 4. For “PPADWordNet Verbs”, we expanded the data by replacing verbs in PP quads with synonymous WordNet verbs, ignoring verb senses. This resulted in more instances of features F1, F8-10, & F12. We also used different types of noun categorizations: WordNet classes, semantic types from the NELL knowledge base (Mitchell et al., 2015) and unsupervised types. The KB types and the unsupervised types did not perform well, possibly due to the noise found in these categorizations. WordNet classes showed the best results, hence they were used in the final PPAD model for features F3-4 & F7. In Section 5.1, PPAD corresponds to the best model. 5.4 Discussion: The F1 Score of Knowledge Why did we not reach 100% accuracy? Should relational knowledge not be providing a much bigger performance boost than we have seen in the results? 
To answer these questions, we characterize our features in terms precision and recall, and F1 measure of their knowledge sources in Table 5. A low recall feature means that the feature does not fire on many examples, the feature’s knowledge source suffers from low coverage. A low precision feature means that when it fires, the feature could be incorrect, the feature’s knowledge source contains a lot of errors. From Table 5, the noun-noun binary relation features (F1 −2) have low precision, but high recall. This is because the SVO data, extracted from the ClueWeb09 corpus, that we used as our relational knowledge source is very noisy but it is high coverage. The low precision of the SVO data causes these features to be detrimental to performance. Notice that when we used a filtered version of the data, in feature F2, the data was no longer detrimental to performance. However, the F2 feature is low recall, and therefore it’s impact on performance is also limited. The noun semantic category features (F3−4) have high recall and precision, hence it to be expected that their impact on performance is significant. The verb role filler features (F5), obtained from VerbNet have high precision but low recall, hence their marginal impact on performance is also to be expected. The preposition definition features (F6) poor precision made them unusable. The discourse features (F7) are based noun semantic types and lexical features (F8−15), both of which have high recall and precision, hence they useful impact on performance. In summary, low precision in knowledge is detrimental to performance. In order for knowledge to make even more significant contributions to language understanding, high precision, high recall knowledge sources are required for all features types. Success in ongoing efforts in knowledge base construction projects, will make performance of our algorithm better. 372 Relation Prep. Attachment accuracy Example(s) acquired from 99.97 BNY Mellon acquired Insight from Lloyds. hasSpouse in 91.54 David married Victoria in Ireland. worksFor as 99.98 Shubert joined CNN as reporter. playsInstrument with 98.40 Kushner played guitar with rock band Weezer. Table 6: Binary relations extended to ternary relations by mapping to verb-preposition pairs in PP 5tuples. PPAD predicted verb attachments with accuracy >90% in all relations. 5.5 Application to Ternary Relations Through the application of ternary relation extraction, we further tested PPAD’s PP disambiguation accuracy and illustrated its usefulness for knowledge base population. Recall that a PP 5-tuple of the form {n0, v, n1, p, n2}, whose enclosed PP attaches to the verb v, denotes a ternary relation with arguments n0, n1, & n2. Therefore, we can extract a ternary relation from every 5-tuple for which our method predicts a verb attachment. If we have a mapping between verbs and binary relations from a knowledge base (KB), we can extend KB relations to ternary relations by augmenting the KB relations with a third argument n2. We considered four KB binary relations and their instances such as worksFor(TimCook, Apple), from the NELL KB. We then took the collection of 4 million 5-tuples that we extracted from Wikipedia. We mapped verbs in 5-tuples to KB relations, based on significant overlaps in the instances of the KB relations, noun pairs such as (TimCook, Apple) with the n0, n1 pairs in the Wikipedia PP 5-tuple collection. 
We found that, for example, instances of the noun-noun KB relation “worksFor” match n0, n1 pairs in tuples where v = joined and p = as , with n2 referring to the job title. Other binary relations extended are: “hasSpouse” extended by “in” with wedding location, “acquired” extended by “from” with the seller of the company being acquired. Examples are shown in Table 6. In all these mappings, the proportion of verb attachments in the corresponding PP quads is significantly high ( > 90%). PPAD is overwhelming making the right attachment decisions in this setting. Efforts in temporal and spatial relation extraction have shown that higher N-ary relation extraction is challenging. Since prepositions specify details that transform binary relations to higher Nary relations, our method can be used to read information that can augment binary relations already in KBs. As future work, we would like to incorporate our method into a pipeline for reading beyond binary relations. One possible direction is to read details about the where,why, who of events and relations, effectively moving from extracting only binary relations to reading at a more general level. 6 Conclusion We have presented a knowledge-intensive approach to prepositional phrase (PP) attachment disambiguation, which is a type of syntactic ambiguity. Our method incorporates knowledge about verbs, nouns, discourse, and noun-noun binary relations. We trained a model using labeled data and unlabeled data, making use of expectation maximization for parameter estimation. Our method can be seen as an example of tapping into a positive feedback loop for machine reading, which has only become possible in recent years due to the progress made by information extraction and knowledge base construction techniques. That is, using background knowledge from existing resources to read better in order to further populate knowledge bases with otherwise difficult to extract knowledge. As future work, we would like to use our method to extract more than just binary relations. Acknowledgments We thank Shashank Srivastava and members of the NELL team at CMU for helpful comments. This research was supported by DARPA under contract number FA8750-13-2-0005. 373 References Eneko Agirre, Timothy Baldwin, and David Martinez. 2008. Improving parsing and PP attachment performance with sense information. In Proceedings of ACL-08: HLT, pages 317–325. Gerry Altmann and Mark Steedman. 1988. Interaction with context during human sentence processing. Cognition, 30:191–238. Enrique Henestroza Anguiano and Marie Candito. 2011. Parse correction with specialized models for difficult attachment types. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, EMNLP, pages 1222–1233. Michaela Atterer and Hinrich Sch¨utze. 2007. Prepositional phrase attachment without oracles. Computational Linguistics, 33(4):469–476. S¨oren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary G. Ives. 2007. Dbpedia: A nucleus for a web of open data. In The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007., pages 722–735. Michele Banko, Michael J Cafarella, Stephen Soderland, Matthew Broadhead, and Oren Etzioni. 2007. Open information extraction for the web. In IJCAI, volume 7, pages 2670–2676. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. 
Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD ’08, pages 1247–1250. Eric Brill and Philip Resnik. 1994. A rule-based approach to prepositional phrase attachment disambiguation. In 15th International Conference on Computational Linguistics, COLING, pages 1198– 1204. Andrew Carlson, Justin Betteridge, Richard C. Wang, Estevam R. Hruschka, Jr., and Tom M. Mitchell. 2010. Coupled semi-supervised learning for information extraction. In Proceedings of the Third ACM International Conference on Web Search and Data Mining, WSDM ’10, pages 101–110. Michael Collins and James Brooks. 1995. Prepositional phrase attachment through a backed-off model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 27–38. Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the International Conference on Language Recources and Evaluation (LREC, pages 449–454. Luciano Del Corro and Rainer Gemulla. 2013. Clausie: Clause-based open information extraction. In Proceedings of the 22Nd International Conference on World Wide Web, WWW ’13, pages 355– 366. A. P. Dempster, N. M. Laird, and D. B. Rubin. 1977. Maximum likelihood from incomplete data via the em algorithm. Journal of the Royal Statistical Society, Series B, 39(1):1–38. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011a. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 1535–1545. Anthony Fader, Stephen Soderland, and Oren Etzioni. 2011b. Identifying relations for open information extraction. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1535–1545. Association for Computational Linguistics. Christiane Fellbaum, editor. 1998. WordNet: an electronic lexical database. MIT Press. Lyn Frazier. 1978. On comprehending sentences: Syntactic parsing strategies. Ph.D. thesis, University of Connecticut. Sanda M. Harabagiu and Marius Pasca. 1999. Integrating symbolic and statistical methods for prepositional phrase attachment. In Proceedings of the Twelfth International Florida Artificial Intelligence Research Society ConferenceFLAIRS, pages 303– 307. Donald Hindle and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103–120. Dirk Hovy, Ashish Vaswani, Stephen Tratz, David Chiang, and Eduard Hovy. 2011. Models and training for unsupervised preposition sense disambiguation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, pages 323–328. John Kimball. 1988. Seven principles of surface structure parsing in natural language. Cognition, 2:15– 47. Karin Kipper, Anna Korhonen, Neville Ryant, and Martha Palmer. 2008. A large-scale classification of english verbs. Language Resources and Evaluation, 42(1):21–40. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics,ACL, pages 423–430. 374 Ni Lao, Tom Mitchell, and William W Cohen. 2011. Random walk inference and learning in a large scale knowledge base. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 529–539. 
Association for Computational Linguistics. Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics, ACL, pages 236–244. Tom M. Mitchell, William W. Cohen, Estevam R. Hruschka Jr., Partha Pratim Talukdar, Justin Betteridge, Andrew Carlson, Bhavana Dalvi Mishra, Matthew Gardner, Bryan Kisiel, Jayant Krishnamurthy, Ni Lao, Kathryn Mazaitis, Thahir Mohamed, Ndapandula Nakashole, Emmanouil Antonios Platanios, Alan Ritter, Mehdi Samadi, Burr Settles, Richard C. Wang, Derry Tanti Wijaya, Abhinav Gupta, Xinlei Chen, Abulhair Saparov, Malcolm Greaves, and Joel Welling. 2015. Never-ending learning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, January 2530, 2015, Austin, Texas, USA., pages 2302–2310. Ndapandula Nakashole and Tom M. Mitchell. 2014. Language-aware truth assessment of fact candidates. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 1009–1019. Ndapandula Nakashole and Gerhard Weikum. 2012. Real-time population of knowledge bases: opportunities and challenges. In Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction, pages 41–45. Association for Computational Linguistics. Ndapandula Nakashole, Martin Theobald, and Gerhard Weikum. 2011. Scalable knowledge harvesting with high precision and high recall. In Proceedings of the Fourth ACM International Conference on Web Search and Data Mining, WSDM ’11, pages 227– 236. Ndapandula Nakashole, Tomasz Tylenda, and Gerhard Weikum. 2013. Fine-grained semantic typing of emerging entities. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL, pages 1488–1497. Kamal Nigam, Andrew McCallum, Sebastian Thrun, and Tom M. Mitchell. 2000. Text classification from labeled and unlabeled documents using EM. Machine Learning, 39(2/3):103–134. Patrick Pantel and Dekang Lin. 2000. An unsupervised approach to prepositional phrase attachment using contextually similar words. In 38th Annual Meeting of the Association for Computational Linguistics, ACL. Adwait Ratnaparkhi, Jeff Reynar, and Salim Roukos. 1994. A maximum entropy model for prepositional phrase attachment. In Proceedings of the Workshop on Human Language Technology, HLT ’94, pages 250–255. Adwait Ratnaparkhi. 1998. Statistical models for unsupervised prepositional phrase attachement. In 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, COLING-ACL, pages 1079–1085. Vivek Srikumar and Dan Roth. 2013. Modeling semantic relations expressed by prepositions. TACL, 1:231–242. Jiri Stetina and Makoto Nagao. 1997. Prepositional phrase attachment through a backed-off model. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 66–80. Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowledge. In Proceedings of the 16th international conference on World Wide Web, pages 697–706. ACM. Kristina Toutanova, Christopher D. Manning, and Andrew Y. Ng. 2004. Learning random walk models for inducing word dependency distributions. In Machine Learning, Proceedings of the Twenty-first International Conference, ICML. Olga van Herwijnen, Antal van den Bosch, Jacques M. B. Terken, and Erwin Marsi. 2003. 
Learning PP attachment for filtering prosodic phrasing. In 10th Conference of the European Chapter of the Association for Computational Linguistics,EACL, pages 139–146. Greg Whittemore, Kathleen Ferrara, and Hans Brunner. 1990. Empirical study of predictive powers od simple attachment schemes for post-modifier prepositional phrases. In 28th Annual Meeting of the Association for Computational Linguistics,ACL, pages 23–30. Derry Wijaya, Ndapandula Nakashole, and Tom Mitchell. 2014. Ctps: Contextual temporal profiles for time scoping facts via entity state change detection. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Shaojun Zhao and Dekang Lin. 2004. A nearestneighbor method for resolving pp-attachment ambiguity. In Natural Language Processing - First International Joint Conference, IJCNLP, pages 545–554. 375
2015
36
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 376–386, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics A Convolution Kernel Approach to Identifying Comparisons in Text Maksim Tkachenko School of Information Systems Singapore Management University [email protected] Hady W. Lauw School of Information Systems Singapore Management University [email protected] Abstract Comparisons in text, such as in online reviews, serve as useful decision aids. In this paper, we focus on the task of identifying whether a comparison exists between a specific pair of entity mentions in a sentence. This formulation is transformative, as previous work only seeks to determine whether a sentence is comparative, which is presumptuous in the event the sentence mentions multiple entities and is comparing only some, not all, of them. Our approach leverages not only lexical features such as salient words, but also structural features expressing the relationships among words and entity mentions. To model these features seamlessly, we rely on a dependency tree representation, and investigate the applicability of a series of tree kernels. This leads to the development of a new context-sensitive tree kernel: Skip-node Kernel (SNK). We further describe both its exact and approximate computations. Through experiments on real-life datasets, we evaluate the effectiveness of our kernel-based approach for comparison identification, as well as the utility of SNK and its approximations. 1 Introduction When weighing various alternatives, users increasingly turn to the social media, by scouring online reviews, discussion forums, etc. Our goal is to extract from such corpora those text snippets where users make direct comparisons of entities. While sentiment analysis (Pang and Lee, 2008) may be helpful in evaluating individual entities, comparison by the same author within a sentence provides an unambiguous and more equitable basis for the relative positions of two entities on some aspect. For example, the sentence s1 in Table 1, taken from an Amazon review about a digital camera, makes two distinct comparisons: #1) between “A630” and “A-series cameras” and #2) between “A630” and “its competition”, with a clear sense of which entity mention is the greater on some aspect (“larger”). Moreover, comparisons may be objective (e.g., larger) or subjective (e.g., better), while sentiments are primarily subjective. Problem Given a sentence and a specific pair of entity mentions, we seek to determine if a comparison exists between those two mentions. In previous work, the problem was formulated as identifying comparative sentences, i.e., those containing at least one comparison (Jindal and Liu, 2006a). This is not ideal because a sentence may contain more than two entity mentions, and may be comparing only some of them. For instance, s1 is comparative with respect to the pair (A630, A-series cameras) and the pair (A630, its competition), but not the pair (A-series cameras, its competition). We therefore postulate that the more appropriate formulation is comparisons within sentences. If a sentence compares two entities (A, B) with respect to some aspect Z, it should be possible to reformulate it into another sentence such as: “A is better than B with respect to Z” (Kessler and Kuhn, 2014a). Based on this definition, there is no comparison between (A-series cameras, its competition) in s1. 
Here, we adopt this apt definition with a slight restriction to make it more practical, and seek to identify such comparisons automatically. We consider only sentences with at least two entity mentions involved in gradable comparisons, i.e., a clear sense of scaling in the comparison (e.g., A is better than B.). Such comparisons are more useful in investigating the pros and cons of entities, as opposed to equative comparisons expressing parity between two mentions (e.g., A is as good as B.), or superlative comparisons expressing the primacy of an entity with respect to unknown reference entities (e.g., A is the best.). 376 ID Sentence Remarks s1 The A630 is slightly larger than previous generation Aseries cameras, and also larger than much of its competition. Contains two comparisons: (A630, A-series cameras) and (A630, its competition). s2 I got 30D for my wife because she wanted a better camera. Includes comparative predicate “better”, but contains no comparison. s3 I had D3100 and it was nice but the D5100 is truly amazing. No comparative predicate, but has a comparison: (D3100, D5100). s4 D7000 and D7100 do better at high ISO than D300s. Contains two comparisons: (D7000, D300s) and (D7100, D300s). Table 1: Example Sentences with ≥2 Entity Mentions from Amazon.com Digital Cameras Reviews Approach For English, there usually is a comparative predicate that anchors a comparison, such as “better” or “worse”. However, many sentences with such predicate words are not comparisons. The sentence s2 in Table 1 has the word “better”, but does not contain any comparison between the entity mentions. Yet, other words (e.g., “amazing”), though not a comparative predicate, could signify a comparison, e.g., in s3 in Table 1. (Jindal and Liu, 2006a) considered the “context” around a predicate. A sentence is transformed into a sequence involving the predicate and the part of speech (POS) within a text window around the predicate (usually three words before and after). For instance, s2 in Table 1 would be transformed into the sequence ⟨PRP VBD DT better NN⟩. Such sequences are labeled comparative or non-comparative, upon which (Jindal and Liu, 2006a) applies sequential pattern mining (Agrawal and Srikant, 1995; Ayres et al., 2002; Pei et al., 2001) to learn class sequential rule (CSR). These CSRs are then used as features in classifying comparative sentences. While (Jindal and Liu, 2006a) makes some progress by considering context, its performance may be affected by several factors. First, CSRs are not sensitive to entity mentions. It may classify s1 as comparative generally, missing the nuance that s1 is not comparing the pair (A-series cameras, its competition). Second, as CSRs requires a list of comparative predicates, the quality and the completeness of the list are crucial. For instance, “amazing” is not in their list, and thus the comparison in s3 may not be identifiable. Third, due to the windowing effect, CSRs has a limited ability to model long-range dependencies. For s4, a window of three words around the predicate “better” excludes the word “than” that would have been very informative. Yet, enlarging the window might then bring in irrelevant associations. What is important then is not so much whether a sentence is comparative as whether two entity mentions are related by a comparative relation. One insight we draw is how comparison identification is effectively a form of relation extraction. 
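For concreteness, the CSR-style windowed representation discussed above can be sketched as follows. The keyword list and the part-of-speech tags in the example are hand-supplied assumptions for illustration; the original approach relies on a much larger manually curated set of comparative predicates and on automatic tagging.

```python
# Hypothetical, minimal keyword list of comparative predicates.
COMPARATIVE_KEYWORDS = {"better", "worse", "more", "less", "than"}

def csr_sequence(tokens, pos_tags, window=3):
    """Turn a tagged sentence into keyword-anchored sequences such as
    <PRP VBD DT better NN>: the keyword is kept, surrounding words are
    replaced by their POS tags within a fixed window (three by default)."""
    sequences = []
    for i, token in enumerate(tokens):
        if token.lower() in COMPARATIVE_KEYWORDS:
            left = pos_tags[max(0, i - window):i]
            right = pos_tags[i + 1:i + 1 + window]
            sequences.append(left + [token.lower()] + right)
    return sequences

# Example (sentence s2): "I got 30D for my wife because she wanted a better camera"
tokens = "I got 30D for my wife because she wanted a better camera".split()
tags = ["PRP", "VBD", "NN", "IN", "PRP$", "NN", "IN", "PRP", "VBD", "DT", "JJR", "NN"]
print(csr_sequence(tokens, tags))   # [['PRP', 'VBD', 'DT', 'better', 'NN']]
```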
While there are diverse relation extraction formulations (Culotta and Sorensen, 2004; Bunescu and Mooney, 2005; Nguyen et al., 2009), our distinct relation type is comparison of two entity mentions. Armed with this insight, we propose a kernelbased approach based on a dependency tree representation (Nivre, 2005), with significant innovations motivated by the comparative identification task. This proposed approach has several advantages over CSR. Most importantly, it models dependencies between any pair of words (including entity mentions), whereas CSR only relates a comparative predicate to nearby POS tags. For other advantages, unlike CSR, this approach is contingent on neither a pre-specified list of comparative predicates, nor a specific window length. Contributions In this paper, we make the following contributions. First, we re-formulate the problem of automatic identification of comparative sentences into the more general task of identifying comparisons within sentences. Second, we propose to frame comparison identification as a relation extraction problem. This entails: #1) deriving an appropriate dependency tree representation of sentences to enable discrimination of comparison vs. non-comparison within the same sentence (see Section 2), and #2) a systematic exploration of the applicability of various tree kernel spaces to our task (see Section 3). Third, due to the limitation of the existing tree kernels, we propose a new tree kernel: Skip-node Kernel that is contextsensitive, and discuss both its exact and approximate computations (see Section 4). Fourth, we validate its effectiveness and efficiency through experiments on real-life datasets (see Section 5). 377 2 Overview Task The input is a corpus of sentences S concerning a set of entities within a certain domain (e.g., digital cameras). Every sentence s ∈S contains at least two entity mentions. The set of entity mentions in s is denoted Ms. For instance, the sentence s4 in Table 1 contains three entity mentions: D7000, D7100, and D300s. The same entity may be mentioned more than once in a sentence, in which case every mention is a distinct instance. As output, we seek to determine, for each pair of entity mentions (mi < mj) ∈Ms in a sentence s ∈S, a binary class label of whether s contains a comparison between mi and mj. For the pair (D7000, D7100) in s4, the correct class is 0 (no comparison). For the other two pairs (D7000, D300s) and (D7100, D300s), the correct class is 1 (comparisons). We do not seek to identify the aspect of comparison, which is a different problem of independent research interest (see Section 6). Dependency Tree In order to represent both the lexical units (words) as well their structural dependencies seamlessly, we represent each sentence s as a dependency tree T. For example, Figure 1(a) shows the dependency tree of s4 in Table 1. The tree is rooted at the main verb (“do”), and each dependency relation associates a head word and a dependent word. To describe a tree or any of its substructures, we use the bracket notation. Figure 1(a) in this notation is [do [D7000 [and] [D7100]] [better [at [ISO [high]]] [than [D300s]]]]. Here, we make two observations. First, there is one tree even for a sentence with multiple pairs of entity mentions. Second, the information signalling a comparison is borne by the structures around the mentions (e.g., [better [than]], rather than the actual mentions (e.g., “D7000”). 
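To make the bracket notation concrete, the sketch below serializes a hand-built dependency tree of s4 into that form. The DepNode class and the manual tree construction are illustrative only; they are not the paper's implementation, and in practice the tree would come from a dependency parser.

```python
class DepNode:
    """A minimal dependency-tree node: a word plus its dependents in sentence order."""
    def __init__(self, word, children=None):
        self.word = word
        self.children = children or []

    def to_bracket(self):
        if not self.children:
            return f"[{self.word}]"
        inner = " ".join(c.to_bracket() for c in self.children)
        return f"[{self.word} {inner}]"

# Dependency tree of s4, "D7000 and D7100 do better at high ISO than D300s",
# rooted at the main verb "do".
tree = DepNode("do", [
    DepNode("D7000", [DepNode("and"), DepNode("D7100")]),
    DepNode("better", [
        DepNode("at", [DepNode("ISO", [DepNode("high")])]),
        DepNode("than", [DepNode("D300s")]),
    ]),
])
print(tree.to_bracket())
# [do [D7000 [and] [D7100]] [better [at [ISO [high]]] [than [D300s]]]]
```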
These observations lead us to introduce a modified dependency tree that is distinct for every pair of mentions, achieved by replacing each entity mention of interest by a placeholder token. Here, we use the token "#camera" for illustration. Figure 1(b) shows the modified tree for the pair (D7000, D7100). This enables learning in an entity-agnostic way, because the token ensures that sentences about different cameras are interpreted similarly.

Figure 1: Modified dependency trees. (a) original dependency tree: D7000 and D7100 do better at high ISO than D300s; (b) modified tree for (D7000, D7100): #camera and #camera do better at high ISO than D300s; (c) modified tree for (D7000, D300s): #camera and D7100 do better at high ISO than #camera; (d) modified tree for (D7100, D300s): D7000 and #camera do better at high ISO than #camera.

Convolution Kernel Observe how the trees of the pair (D7000, D300s) in Figure 1(c) and the pair (D7100, D300s) in Figure 1(d), which are both comparisons, share certain substructures, such as [do [better [than [#camera]]]]. In contrast, the tree in Figure 1(b) for the pair (D7000, D7100), which is not a comparison, does not contain this substructure. What we need is a way to systematically examine tree substructures to determine the similarity between two trees. Kernel methods offer a way to measure the similarity by exploring an implicit feature space without enumerating all substructures explicitly. Suppose that T denotes the space of all possible instances. A kernel function K is a symmetric and positive semidefinite function that maps the instance space T × T to a real value in the range [0, ∞) (Haussler, 1999). A tree kernel function can be reformulated into a convolution kernel (Collins and Duffy, 2001), shown in Equation 1.

K(T1, T2) = Σ_{ni ∈ T1} Σ_{nj ∈ T2} D(ni, nj)    (1)

Here, ni and nj denote each node in their respective tree instances T1 and T2. D(ni, nj) is the number of common substructure instances between the two sub-trees rooted in ni and nj respectively. The exact form of D(ni, nj) depends on the specific definition of the tree kernel space. In Section 3, we systematically explore the applicability of various tree kernel spaces, leading to the introduction of the new Skip-node Kernel. The appropriate kernel function can be embedded seamlessly in kernel methods for classification. In this work, we use Support Vector Machines (SVM) (Steinwart and Christmann, 2008).

3 Tree Kernel Spaces
Tree kernels count substructures of a tree in some high-dimensional feature space. Different tree kernel spaces vary in the amount and the type of information they can capture, and thus may suit different purposes. To find a suitable tree kernel for the comparison identification task, we first systematically explore a progression of known tree kernel spaces, including Sub-tree, Subset Tree, and Partial Tree. Through the use of appropriate examples, we show how these existing tree kernel spaces may not be appropriate for certain instances. This section culminates in the introduction of a new feature space that we call Skip-node.

Sub-tree (ST) Space In this space, the basic substructure is a subgraph formed by a node along with all its descendants. Applying this kernel to two dependency trees of similar sentences may not be appropriate due to, for example, modifier words that change the dependency structure. To illustrate this, let us examine the two dependency parses in Figure 2. Both support comparisons, and ideally we can detect some level of similarity.
However, if we consider only sub-trees, the two dependency trees share in common only two fragments: [#camera] and [is]. Neither of these fragments is indicative of a comparison. #camera is better than #camera (a) (b) Figure 2: Dependency parses. Working example for the Sub-tree, Subset Tree, Partial Tree kernels. Subset Tree (SST) Space We next consider the SST kernel, which computes similarity in a more general space of substructures than ST. Any subgraph of a tree that preserves production rules is counted. This definition suggests SST is intended more for a constituency parse (Moschitti, 2006a). In this feature space, the parses in Figure 2 now have in common the following fragments: [#camera], [is], [than [#camera]]. This representation is better than ST’s, e.g., the fragment [than [#camera]] is informative. However, as a whole, the set of features are still insufficient to identify a comparison. #camera is twice as expensive as #camera (a) Previously I had D60 and D7100 and #camera is twice as good as #camera (b) Previously I had D60 and #camera and this camera is twice as good as #camera (c) Figure 3: Dependency parses. Working example for the Partial Tree, Skip-node kernels. Partial Tree (PT) Space In turn, the PT space allows breaking of production rules, making it a better choice than SST for dependency parses. PT kernel would find that the parse in Figure 2(a) with all its subgraphs can be matched as a whole within the parse in Figure 2(b), identifying a close match. However, PT kernel is prone to two drawbacks. By generating an exponential feature space, it may overfit and degrade generalization (Cumby and Roth, 2003). More importantly, PT considers tree fragments independently from their contexts, resulting in features involving non-related parts of a sentence. This is particularly apparent when we consider multiple entities within a sentence. Suppose that Figure 3(a) is in our training set, and we have the sentence below in the testing set: Previously, I had D60 and D7100, and this camera is twice as good as D60. Figure 3(b) shows the parse for (this camera, D60), and Figure 3(c) for (D7100, D60). The former is a comparison, and should match Figure 3(a). The latter is not and should not match. PT kernel cannot resolve this ambiguity, computing the same similarity value to Figure 3(a) for both. The common features are: [#camera], [is], [twice], [as], and [as [#camera]]. Skip-node (SN) Space Figures 3(a) and 3(b) share a similar substructure “twice as ... as”, but because they use different words to express the comparisons (“expensive” vs. “good”), previous kernels treat their features disjointly, missing out on their similarity. To reduce this over-reliance on exact word similarity, we seek a feature space that 379 #camera is twice as as #camera (a) Previously I had D60 and D7100 and #camera is twice as as #camera (b) Figure 4: Dependency parses with skipped nodes. would allow some degree of relaxation in determining the structural similarity between trees. We therefore propose the Skip-node (SN) space, which represents a generalized space of tree fragments, where some nodes can be “skipped” or relabeled to a special symbol ‘*’ that would match nodes of any label. A restriction on this space is that each skip symbol must connect two non-skip (regular) nodes. The implication is that skips code for some notion of connecting distance between non-skip nodes. 
Moreover, the space would not include features such as [* [* [#camera]]] that serve only to indicate the presence of ancestors, and not any relationship of non-skip nodes. Figure 4 resolves the ambiguity in Figure 3 by skipping the words “expensive” and “good”, introducing a new set of features: [* [#camera] [is] [twice] [as] [as [#camera]]]. Note how in this case the skip symbol effectively serves as a “context” that pulls together the previously disjoint features identified by the PT kernel. These new context-sensitive features would allow a match between the earlier Figures 3(a) and 3(b), but not Figure 3(c). Thus, SN space effectively generalizes over the PT space, and enriches it with context-sensitive features. To avoid overfitting, in addition to decay parameter λ used in PT kernel, we associate SN kernel with two other parameters. The SN space consists of rooted ordered trees where some nodes are labeled with a special skip symbol ‘*’, such that the number of regular nodes (not marked with ‘*’) is at most S, and each skip node is within a distance of L from a non-skip node. This engenders a graceful gradation of similarity as the number of skip nodes in a substructure grows, yet imposes a limit to the extent of relaxation. 4 Skip-node Kernel Computation We now discuss the computation of Skip-node Kernel, first exactly, and thereafter approximately. 4.1 Exact Computation We define the alignment of common fragments between two trees in the Skip-node space. When S = 1, only singleton nodes with the same labels contribute to the kernel, and alignment is straightforward. When aligning fragments with two regular nodes (S > 1), we consider their connection structure and the order of the child nodes to prevent over-counting substructures with the same labels (e.g., [*[as][as]] in Figure 4). To preserve the natural order of words in a sentence, we enumerate the tree nodes according to preorder, left-to-right depth-first search (DFS) traversal. In turn, the connection structure is defined by the skip-node path connecting two regular nodes. This can be expressed as a sequence of upward (towards the root) and downward (towards the leaves) steps we need to perform to get from the leftmost to the rightmost regular node. Due to the natural ordering of regular nodes, upward steps are followed by downward steps. The sequence can be expressed as a pair of numbers: ⟨ρ(nl, u), ρ(nr, u)⟩, where nl is the leftmost regular node of a fragment, nr is the rightmost one, u = σ(nl, nr) is the lowest common ancestor of nodes nl, nr, and ρ returns the number of edges in the shortest path connecting two nodes. Suppose a rooted tree T = (N, E) has preorder DFS enumeration N = (n1, n2, ..., n|N|). For i < j, we define a function π(ni, nj), which canonically represents the way two nodes are connected in a tree, as follows: π(ni, nj) = ⟨ρ(ni, σ(ni, nj)), ρ(nj, σ(ni, nj))⟩. DEFINITION 1 (STRUCTURAL ISOMORPHISM): Given two trees T1 = (N1, E1), T2 = (N2, E2), we say that pairs of nodes (vi, ui′), (vj, uj′) ∈N1 × N2 are structurally isomorphic and write (vi, ui′) ↭(vj, uj′) when π(vi, vj) = π(ui′, uj′) on the valid domain. It can be shown that structural isomorphism is a transitive relation. This property allows us to grow aligned fragments by adding one node at a time: (vi, ui′) ↭(vj, uj′) ∧(vj, vj′) ↭(vk, uk′) ⇒ (vi, ui′) ↭(vk, uk′). 380 To compute the kernel, we use a graph-based approach to enumerate all the common substructures in the Skip-node space. Given two trees T1 and T2, we begin by aligning their nodes. 
The sets of nodes in T1 and T2 are N1 and N2 respectively. Let NG be the set of pairs (ni, nj) ∈ N1 × N2 where ni and nj have the same label. On top of NG, we build a graph G = (NG, EG). We draw an edge between two vertices (vi, vk), (uj, ul) ∈ NG if (vi, uj) ↭ (vk, ul) and ρ(vi, vk) ≤ L. Any connected subgraph of G represents a feature in the Skip-node space common to both T1 and T2. The kernel then needs to count the number of connected subgraphs of size at most S. To see that this procedure is correct, we simply need to trace back the construction of graph G and build a bijection from a subgraph of G to the corresponding fragments of T1 and T2. Enumerating all the connected subgraphs of a given graph requires exponential time. The algorithm described above requires O(|N1||N2| + Σ_{i=1..S} C(|NG|, i)) time, where C(n, i) denotes the binomial coefficient, assuming that the distance between two nodes in a tree can be computed in O(1) with appropriate linear preprocessing. See (Bender and Farach-Colton, 2000) for insight. The exact computation is still tractable on the condition that S and L are not very large. This condition would probably hold in most realistic scenarios. Yet, to improve the practicality of the kernel, we propose a couple of approximations as follows.

4.2 Approximate Computation
One reason for the complexity of the Skip-node kernel is that although the graph G is formed by aligning two trees, by allowing connections through skips, G itself may not necessarily be in the form of a tree. In deriving an approximation, our strategy is to form G through alignment of linear substructures of the original two trees. A Skip-node space over linear structures can be computed in polynomial time using dynamic programming.

Linear Skip-node One approximation is to consider linear substructures in the form of root-paths. A root-path is a path from the root of a tree to a leaf. Consider two trees T1 and T2 with DFS-enumerated nodes N1 = (v1, v2, ..., vm1) and N2 = (u1, u2, ..., um2) respectively. Here, v1 and u1 are the roots, and vm1 and um2 are leaves. Starting with common fragments at the leaves, we grow them into larger common fragments towards the root. We call this approximation Linear Skip-node. Figure 5(a) shows examples of features considered by Linear Skip-node for the illustrated tree T in the Skip-node space (S = 3, L = 2). The kernel function can be decomposed into:

K(T1, T2) = Σ_{vi ∈ N1} Σ_{uj ∈ N2} Σ_{s=1..S} λ^s D(vi, uj, s),

where D(vi, uj, s) is the number of common substructures of size s whose leftmost regular nodes are vi and uj, and λ is a decay factor for substructure size. The recursive definition of the kernel is:

D(vi, uj, s) = Σ_{i<k≤m1} Σ_{j<l≤m2} I(vi, vk, uj, ul) · D(vk, ul, s−1),
D(vi, uj, 1) = 1 if label(vi) = label(uj), and 0 otherwise;
I(vi, vk, uj, ul) = 1[(vi, uj) ↭ (vk, ul)] · 1[ρ(vi, vk) ≤ L] · 1[vi is an ancestor of vk],

where 1[c] equals 1 when constraint c is satisfied and 0 otherwise. Note that the first two factors of the indicator function I represent the general Skip-node space constraints, while the last factor ensures that features are computed along root-paths.

Lookahead Skip-node The second approximation, Lookahead Skip-node, is based on the observation that when growing a substructure, we do not have to confine the growth to ancestors, since the DFS traversal already ensures an iterative manner of computation. In other words, the constraint that vi is an ancestor of vk can be dropped:

I(vi, vk, uj, ul) = 1[(vi, uj) ↭ (vk, ul)] · 1[ρ(vi, vk) ≤ L].
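A sketch of how these recursions might be implemented as a memoized dynamic program, covering both variants, is given below. It reuses the minimal DepNode class from the earlier sketch, applies the label test at every recursion level (reflecting the requirement that aligned regular nodes share a label), and the default values for S, L, and the decay factor λ are arbitrary choices for illustration rather than settings prescribed by the paper.

```python
# (Same minimal node class as in the earlier sketch.)
class DepNode:
    def __init__(self, word, children=None):
        self.word = word
        self.children = children or []

def tree_info(root):
    """Pre-order, left-to-right DFS enumeration plus depth and root-path per node."""
    order, depth, path = [], {}, {}
    stack = [(root, 0, [root])]
    while stack:
        node, d, p = stack.pop()
        order.append(node)
        depth[id(node)] = d
        path[id(node)] = p
        # Push children in reverse so the leftmost child is visited first.
        for child in reversed(node.children):
            stack.append((child, d + 1, p + [child]))
    return order, depth, path

def pi(a, b, depth, path):
    """pi(a, b) = (rho(a, lca), rho(b, lca)): distances to the lowest common ancestor."""
    common = [x for x, y in zip(path[id(a)], path[id(b)]) if x is y]
    lca = common[-1]
    return depth[id(a)] - depth[id(lca)], depth[id(b)] - depth[id(lca)]

def skip_node_kernel(t1, t2, S=3, L=2, lam=0.5, linear=True):
    """Linear (linear=True) or Lookahead (linear=False) Skip-node approximation."""
    n1, d1, p1 = tree_info(t1)
    n2, d2, p2 = tree_info(t2)
    memo = {}

    def D(i, j, s):
        # Leftmost regular nodes must carry the same label.
        if n1[i].word != n2[j].word:
            return 0.0
        if s == 1:
            return 1.0
        key = (i, j, s)
        if key not in memo:
            total = 0.0
            for k in range(i + 1, len(n1)):
                pa = pi(n1[i], n1[k], d1, p1)
                if pa[0] + pa[1] > L:        # rho(v_i, v_k) <= L
                    continue
                if linear and pa[0] != 0:    # Linear variant: v_i must be an ancestor of v_k
                    continue
                for l in range(j + 1, len(n2)):
                    if pi(n2[j], n2[l], d2, p2) == pa:   # structural isomorphism
                        total += D(k, l, s - 1)
            memo[key] = total
        return memo[key]

    return sum(lam ** s * D(i, j, s)
               for i in range(len(n1)) for j in range(len(n2)) for s in range(1, S + 1))
```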
In addition to those features generated by Linear Skip-node in Figure 5(a), Lookahead Skip-node can generate additional tree substructures, shown in Figure 5(b). The approximation can be computed using different DFS enumerations, which may result in different feature sets. In our experiments, we used pre-order left-to-right enumeration. Given the enumeration of tree T as in Figure 5, we start to grow feature fragments from node n4. According to the Skip-node space constraints, the growth can only proceed to nodes n1 or n2. Once any of these nodes is attached to n4, we lose tree fragments containing n3, as the procedure allows us to grow substructures only towards nodes with smaller (earlier) DFS enumer381 Figure 5: Features of T in skip-node space (S = 3, L = 2). Numbers indicates pre-order left-to-right DFS enumeration of T. Dashed circles represent skip nodes. Subfigures: (a) - modeled by all; (b) modeled by Lookahead Skip-node, not by Linear Skip-node; (c) - modeled only by Exact Skip-node. Domain # sentences % comp. # pairs % comp. Camera 1716 59.4% 2170 49.9% Cell 821 35.2% 1110 30.5% Table 2: The dataset size for each domain. ation numbers. Figure 5(c) shows the fragments that Lookahead Skip-node cannot capture1. The computation procedure is similar for both approximations and requires O(S|N1|2|N2|2). 5 Experiments Data For experiments, we compiled two annotated datasets in two domains: Digital Camera and Cell Phone from online review sentences. The reviews were collected from Amazon and Epinions2. We identified the entity mentions through dictionary matching, followed by manual annotation to weed out false positives. Each dictionary entry is a product name (e.g., Canon PowerShot D20, D7100) or a common product reference (e.g., this camera, that phone). The dataset includes only sentences that contain at least two entity mentions. Every pair of entities within a sentence was annotated with a comparative label according to the definition given in Section 2. A sentence is comparative if at least one pair of entities within it is in a comparative relation. Table 2 shows the dataset properties, in terms of the number sentences and the percentage that are comparative sentences, as well as the number of pairs of entity mentions and the percentage that are comparative relations. There are more pairs than sentences, i.e., many sentences mention more than two entities. This dataset subsumes the annotated gradable 1In this particular case, all features could have been computed by Lookahead Skip-node using preorder right-to-left DFS enumeration, although it may not be true in general. 2We used already available snapshots for Epinions dataset: http://groups.csail.mit.edu/rbg/code/precis/. Camera Cell P R F1 P R F1 CSR 74.3 52.3 61.3 48.9 61.5∗ 54.3 BoW 76.9 76.3 76.6 62.2 58.0 59.8 BoW† 77.3 71.9 74.4 69.0 56.3 61.8 SNK 80.5∗ 75.2 77.7∗∗ 77.2∗ 55.1 64.1∗ Table 3: Comparison identification task comparisons of (Kessler and Kuhn, 2014a) derived from Epinions reviews on Digital Cameras. (Jindal and Liu, 2006a)’s dataset is inapplicable, due to its lack of entity-centric comparison. Evaluation The experiments were carried out with SVM-light-TK framework3 (Joachims, 1999; Moschitti, 2006b), into which we built Skip-node Kernel. We further release a separate standalone library that we built, called Tree-SVM4, which does SVM optimization using the tree kernels described in this paper. The sentences were parsed and lemmatized with the use of the Stanford NLP software (Chen and Manning, 2014). 
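A sketch of how labeled pairwise instances might be generated from such an annotated sentence is given below; the data structures (token lists, mention spans, and gold pair sets) are assumptions for illustration rather than the format of the released dataset.

```python
from itertools import combinations

def pair_instances(tokens, mention_spans, gold_pairs, placeholder="#camera"):
    """Enumerate every unordered pair of entity mentions in a sentence and attach a
    binary label (1 = annotated as a comparison). Each instance keeps a copy of the
    sentence in which only the two target mentions are replaced by a placeholder,
    mirroring the modified dependency trees described in Section 2.

    tokens:        the sentence as a list of tokens.
    mention_spans: list of (start, end) token offsets, one per entity mention.
    gold_pairs:    set of frozenset({i, j}) over mention indices that are comparisons.
    """
    instances = []
    for i, j in combinations(range(len(mention_spans)), 2):
        masked = list(tokens)
        # Replace the later span first so the earlier offsets remain valid.
        for start, end in sorted([mention_spans[i], mention_spans[j]], reverse=True):
            masked[start:end] = [placeholder]
        label = 1 if frozenset({i, j}) in gold_pairs else 0
        instances.append((masked, label))
    return instances

def is_comparative(instances):
    """A sentence is comparative if at least one of its mention pairs is a comparison."""
    return any(label == 1 for _, label in instances)
```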
The experiments were done on 10 random data splits in an 80:20 proportion of training vs. testing. Performance is measured by F1, the harmonic mean of precision P and recall R: F1 = 2PR / (P + R). The statistical significance5 is measured by a randomization test (Yeh, 2000). The hyper-parameters, including the baselines', were optimized for F1 through grid-search.

Footnotes: 3 http://disi.unitn.it/moschitti/Tree-Kernel.htm; 4 http://github.com/sitfoxfly/tree-svm; 5 When presenting the results, an asterisk indicates that the outperformance over the second-best result is significant at the 0.05 level; two asterisks indicate the same at the 0.1 level.

5.1 Comparison Identification
Our first and primary objective is to investigate the effectiveness of the proposed approach on the task of identifying comparisons between a pair of entity mentions. Previous work focused on identifying comparative sentences. We compare to three baselines. One is CSR, implemented following the description in (Jindal and Liu, 2006a). Another is BoW, classification using bag-of-words features. For these baselines, if a comparative sentence contains more than one pair of entities, we assume that every pair is in a comparative relation. The third baseline, BoW†, considers only the words in between the two target entities.

Table 3 shows the performance on the comparison identification task (best results are in bold). In terms of F1, it is evident that SNK outperforms the baselines. This is achieved through significant gains in precision. It is expected that the baselines tend to have a high recall. CSR benefits from the human-constructed, predefined list of comparative keywords and key phrases that a kernel-based method is unable to learn from a training split. BoW† tends to have a higher precision than the other baselines, as it is able to distinguish between different pairs of entities within one sentence. While SNK may have an inherent advantage over CSR or BoW due to its entity orientation, to investigate the effectiveness of the method itself, we now compare them on the previous task of comparative sentence identification.

Table 4: Comparative sentence identification task (P / R / F1)
Camera: CSR 74.6 / 51.7 / 60.9; BoW 77.5 / 76.3 / 76.8; BoW† 77.6 / 72.4 / 74.9; SNK 81.0* / 75.2 / 78.0**
Cell: CSR 50.9 / 61.2* / 55.3; BoW 63.4 / 57.7 / 60.2; BoW† 70.9 / 57.3 / 63.2; SNK 77.9* / 54.8 / 64.2

Table 4 shows that even in this task, SNK still performs better than the baselines. Comparing Table 3 and Table 4, the results also concur with the intuition: once we fold multiple entity pairs in a sentence into a single comparative-sentence decision, we observe a drop in recall and an increase in precision.

5.2 Tree Kernel Spaces
Our second objective is to explore the progression of feature spaces discussed in Section 3. Table 5 reports the results on the comparison identification task.

Table 5: Tree kernels (P / R / F1)
Camera: STK 67.5 / 64.0 / 64.9; SSTK 72.1 / 72.6 / 71.8; PTK 79.2 / 74.9 / 76.9; SNK 80.5* / 75.2 / 77.7**
Cell: STK 43.7 / 41.9 / 42.6; SSTK 79.6 / 42.4 / 54.9; PTK 72.3 / 56.0** / 62.7; SNK 77.2 / 55.1 / 64.1*

Table 6: Tree kernels combined with bag-of-words (P / R / F1)
Camera: STKBoW 79.9 / 65.1 / 71.7; SSTKBoW 78.0 / 73.5 / 75.6; PTKBoW 78.6 / 74.1 / 76.2; SNK 80.5 / 75.2** / 77.7*
Cell: STKBoW 77.5 / 45.3 / 56.8; SSTKBoW 71.8 / 54.5 / 61.6; PTKBoW 71.0 / 53.8 / 60.8; SNK 77.2 / 55.1 / 64.1**

The F1 columns show that the performance gradually increases from STK to SNK along with the increase in the complexity of the feature space.
The data is such that these kernels may not have fully modeled the feature space completely enough to show even sharper differences. SNK’s parameters were optimized to non-trivial cases (S > 1 and L > 1) by the grid-search, i.e., S = 3 and L = 2 for Digital Camera and S = 2 and L = 3 for Cell Phone. The trivial case S = 1 represents a standard bag-of-words feature space, i.e., this space is embedded into Skip-node space whenever S > 1. To show that SNK does not merely take advantage of this simple space to compete with structural kernels, we carried out another experiment where we combined STK, SSTK, and PTK with bag-of-word representation of a sentence. Table 6 shows that surprisingly this combination harms the quality of PTK. STK and SSTK gain more from bag-of-words features. Nevertheless, the overall outperformance by SNK remains. 5.3 Skip-node Kernel Approximations Our third objective is to study the utility of the approximations of SNK described in Section 4. Table 7 reports the performance of the approximations. For Camera, the performance of Lookahead Camera Cell P R F1 P R F1 Linear SNK 78.9 77.1∗77.9 71.8 55.3 62.2 Lookahead SNK 80.5 75.2 77.7 71.8 55.3 62.2 SNK 80.5 75.2 77.7 77.2∗ 55.1 64.1 Table 7: Effectiveness: SNK vs. approximations 383 1 2 3 4 5 6 7 8 9 10 S: Size of a Substructure 100 101 102 103 104 CPU Seconds L = 3 SNK Linear SNK Lookahead SNK (a) L = 3, S ∈1..10 1 2 3 4 5 6 7 8 9 10 L: Length of a Skip 1.5 2.0 2.5 3.0 3.5 4.0 4.5 5.0 CPU Seconds S = 3 SNK Linear SNK Lookahead SNK (b) S = 3, L ∈1..10 Figure 6: Efficiency: SNK vs. approximations SNK and SNK are the same. In turn, Linear SNK represents more restricted features, yielding a drop in precision and a gain in recall, resulting in the best F1. For Cell Phone, the approximations are close, but the original SNK has the best F1. To study the running time, we randomly select 500 sentences. Figure 6 shows the time for applying a kernel function to 250k pairs of sentences when we vary two parameters: S and L. When S varies, SNK running time has exponential behaviour, whereas the approximations show fairly linear curves. L seems to influence the computation time linearly for SNK and and its approximations. The experiments were carried out on a PC with Intel Core i5 CPU 3.2 GHz and 4Gb RAM. This experiment shows that the original SNK is still tractable for small S and L, which turn out to be the case for optimal effectiveness. If efficiency is of paramount importance, the two approximations are significantly faster, without much degradation (none in some cases) of effectiveness. 6 Related Work Exploiting comparisons in text begins with identifying comparisons within sentences. The previous state of the art for English is the baseline CSR approach (Jindal and Liu, 2006a). For scientific text, (Park and Blake, 2012) explored handcrafted syntactic rules that might not cross domains well. Comparisons are also studied in other languages, such as Chinese, Japanese, and Korean (Huang et al., 2008; Yang and Ko, 2009; Kurashima et al., 2008; Yang and Ko, 2009; Zhang and Jin, 2012). A different task seeks to identify the “components” within comparative sentences, i.e., entities, aspect, comparative predicate (Jindal and Liu, 2006b; Hou and Li, 2008; Kessler and Kuhn, 2014b; Kessler and Kuhn, 2013; Feldman et al., 2007). 
Others are interested in yet another task to identify the direction of the comparisons (Ganapathibhotla and Liu, 2008; Tkachenko and Lauw, 2014), or the aggregated ranking (Kurashima et al., 2008; Zhang et al., 2013; Li et al., 2011). Our task precedes these tasks in the pipeline. Other than comparison identification, dependency grammar has also found applications in natural language-related tasks, such as sentiment classification (Nakagawa et al., 2010), question answering (Punyakanok et al., 2004; Lin and Pantel, 2001), as well as relation extraction (Culotta and Sorensen, 2004; Bunescu and Mooney, 2005). (Collins and Duffy, 2001) applied convolution kernels (Haussler, 1999; Watkins, 1999) to natural language objects, which evolved into tree kernels, e.g., sub-tree (Vishwanathan and Smola, 2004), subset tree (Collins and Duffy, 2002), descendingpath kernel (Lin et al., 2014), partial tree (Moschitti, 2006a). Skip-node kernel joins the list of tree kernels applicable to dependency trees. These kernels may also apply to other types of trees, e.g., constituency trees (Zhou et al., 2007). (Croce et al., 2011; Srivastava et al., 2013) proposed to capture semantic information along with tree structure, by allowing soft label matching via lexical similarity over distributional word representation. Skip-node gives another perspective on sparsity, using structural alignment of the tree fragments with non-matching labels. As lexical similarity can be incorporated into Skip-node kernel, we consider it orthogonal and complementary. 7 Conclusion We study the effectiveness of a convolution kernel approach for the novel formulation of extracting comparisons within sentences. Our approach outperforms the baselines in identifying comparisons and comparative sentences. Skip-node kernel and its approximations are particularly effective for comparison identification, and potentially applicable to other relation extraction or naturallanguage tasks (the direction of our future work). 384 References Rakesh Agrawal and Ramakrishnan Srikant. 1995. Mining sequential patterns. In Proceedings of the International Conference on Data Engineering (ICDE), pages 3–14. Jay Ayres, Jason Flannick, Johannes Gehrke, and Tomi Yiu. 2002. Sequential pattern mining using a bitmap representation. In Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD), pages 429–435. Michael A. Bender and Martin Farach-Colton. 2000. The lca problem revisited. In Proceedings of the Latin American Symposium on Theoretical Informatics, LATIN ’00, pages 88–94, London, UK, UK. Springer-Verlag. Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation extraction. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing (HLT), pages 724– 731. Danqi Chen and Christopher D Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740–750. Michael Collins and Nigel Duffy. 2001. Convolution kernels for natural language. In Advances in Neural Information Processing Systems (NIPS), pages 625– 632. Michael Collins and Nigel Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proceedings of the Annual Meeting on Association for Computational Linguistics (COLING), pages 263– 270. Danilo Croce, Alessandro Moschitti, and Roberto Basili. 
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 387–396, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics It Depends: Dependency Parser Comparison Using A Web-based Evaluation Tool Jinho D. Choi Emory University 400 Dowman Dr. Atlanta, GA 30322, USA [email protected] Joel Tetreault Yahoo Labs 229 West 43rd St. New York, NY 10036, USA [email protected] Amanda Stent Yahoo Labs 229 West 43rd St. New York, NY 10036, USA [email protected] Abstract The last few years have seen a surge in the number of accurate, fast, publicly available dependency parsers. At the same time, the use of dependency parsing in NLP applications has increased. It can be difficult for a non-expert to select a good “off-the-shelf” parser. We present a comparative analysis of ten leading statistical dependency parsers on a multi-genre corpus of English. For our analysis, we developed a new web-based tool that gives a convenient way of comparing dependency parser outputs. Our analysis will help practitioners choose a parser to optimize their desired speed/accuracy tradeoff, and our tool will help practitioners examine and compare parser output. 1 Introduction Dependency parsing is a valuable form of syntactic processing for NLP applications due to its transparent lexicalized representation and robustness with respect to flexible word order languages. Thanks to over a decade of research on statistical dependency parsing, many dependency parsers are now publicly available. In this paper, we report on a comparative analysis of leading statistical dependency parsers using a multi-genre corpus. Our purpose is not to introduce a new parsing algorithm but to assess the performance of existing systems across different genres of language use and to provide tools and recommendations that practitioners can use to choose a dependency parser. The contributions of this work include: • A comparison of the accuracy and speed of ten state-of-the-art dependency parsers, covering a range of approaches, on a large multigenre corpus of English. • A new web-based tool, DEPENDABLE, for side-by-side comparison and visualization of the output from multiple dependency parsers. • A detailed error analysis for these parsers using DEPENDABLE, with recommendations for parser choice for different factors. • The release of the set of dependencies used in our experiments, the test outputs from all parsers, and the parser-specific models. 2 Related Work There have been several shared tasks on dependency parsing conducted by CoNLL (Buchholz and Marsi, 2006; Nivre and others, 2007; Surdeanu and others, 2008; Hajiˇc and others, 2009), SANCL (Petrov and McDonald, 2012), SPMRL (Seddah and others, 2013), and SemEval (Oepen and others, 2014). These shared tasks have led to the public release of numerous statistical parsers. The primary metrics reported in these shared tasks are: labeled attachment score (LAS) – the percentage of predicted dependencies where the arc and the label are assigned correctly; unlabeled attachment score (UAS) – where the arc is assigned correctly; label accuracy score (LS) – where the label is assigned correctly; and exact match (EM) – the percentage of sentences whose predicted trees are entirely correct. 
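The four metrics above are straightforward to compute from gold and predicted trees. The following is a minimal sketch of such a scorer (our own illustration, not the CoNLL eval.pl script); it assumes each sentence is represented as a list of (head index, label) pairs, one per token, with gold and predicted sentences aligned token by token.

    def attachment_scores(gold_sents, pred_sents):
        # gold_sents / pred_sents: lists of sentences; each sentence is a list
        # of (head, label) tuples, one per token, in the same token order.
        tokens = las = uas = ls = exact = 0
        for gold, pred in zip(gold_sents, pred_sents):
            tree_correct = True
            for (g_head, g_lab), (p_head, p_lab) in zip(gold, pred):
                tokens += 1
                arc_ok = (p_head == g_head)
                lab_ok = (p_lab == g_lab)
                uas += arc_ok
                ls += lab_ok
                las += arc_ok and lab_ok
                if not (arc_ok and lab_ok):
                    tree_correct = False
            exact += tree_correct
        return {"LAS": las / tokens, "UAS": uas / tokens,
                "LS": ls / tokens, "EM": exact / len(gold_sents)}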
Although shared tasks have been tremendously useful for advancing the state of the art in dependency parsing, most English evaluation has employed a single-genre corpus, the WSJ portion of the Penn Treebank (Marcus et al., 1993), so it is not immediately clear how these results gen387 BC BN MZ NW PT TC WB ALL Training 171,120 206,057 163,627 876,399 296,437 85,466 284,975 2,084,081 Development 29,962 25,274 15,422 147,958 25,206 11,467 36,351 291,640 Test 35,952 26,424 17,875 60,757 25,883 10,976 38,490 216,357 Training 10,826 10,349 6,672 34,492 21,419 8,969 12,452 105,179 Development 2,117 1,295 642 5,896 1,780 1,634 1,797 15,161 Test 2,211 1,357 780 2,327 1,869 1,366 1,787 11,697 Table 1: Distribution of data used for our experiments. The first three/last three rows show the number of tokens/trees in each genre. BC: broadcasting conversation, BN: broadcasting news, MZ: news magazine, NW: newswire, PT: pivot text, TC: telephone conversation, WB: web text, ALL: all genres combined. eralize.1 Furthermore, a detailed comparative error analysis is typically lacking. The most detailed comparison of dependency parsers to date was performed by McDonald and Nivre (2007; 2011); they analyzed accuracy as a function of sentence length, dependency distance, valency, non-projectivity, part-of-speech tags and dependency labels.2 Since then, additional analyses of dependency parsers have been performed, but either with respect to specific linguistic phenomena (e.g. (Nivre et al., 2010; Bender et al., 2011)) or to downstream tasks (e.g. (Miwa and others, 2010; Petrov et al., 2010; Yuret et al., 2013)). 3 Data 3.1 OntoNotes 5 We used the English portion of the OntoNotes 5 corpus, a large multi-lingual, multi-genre corpus annotated with syntactic structure, predicateargument structure, word senses, named entities, and coreference (Weischedel and others, 2011; Pradhan and others, 2013). We chose this corpus rather than the Penn Treebank used in most previous work because it is larger (2.9M vs. 1M tokens) and more diverse (7 vs. 1 genres). We used the standard data split used in CoNLL’12 3, but removed sentences containing only one token so as not to artificially inflate accuracy. Table 1 shows the distribution across genres of training, development, and test data. For the most strict and realistic comparison, we trained all ten parsers using automatically assigned POS tags from the tagger in ClearNLP (Choi and Palmer, 2012a), which achieved accuracies of 97.34 and 97.52 on the development and test data, respectively. We also excluded any “morphological” fea1The SANCL shared task used OntoNotes and the Web Treebanks instead for better generalization. 2A detailed error analysis of constituency parsing was performed by (Kummerfeld and others, 2012). 3conll.cemantix.org/2012/download/ids/ ture from the input, as these are often not available in non-annotated data. 3.2 Dependency Conversion OntoNotes provides annotation of constituency trees only. Several programs are available for converting constituency trees into dependency trees. Table 2 shows a comparison between three of the most widely used: the LTH (Johansson and Nugues, 2007),4, Stanford (de Marneffe and Manning, 2008),5 and ClearNLP (Choi and Palmer, 2012b)6 dependency converters. Compared to the Stanford converter, the ClearNLP converter produces a similar set of dependency labels but generates fewer unclassified dependencies (0.23% vs. 3.62%), which makes the training data less noisy. 
Both the LTH and ClearNLP converters produce long-distance dependencies and use function tags for the generation of dependency relations, which allows one to generate rich dependency structures including non-projective dependencies. However, only the ClearNLP converter adapted the new Treebank guidelines used in OntoNotes. It can also produce secondary dependencies (e.g. right-node raising, referent), which can be used for further analysis. We used the ClearNLP converter to produce dependencies for our experiments. LTH Stanford ClearNLP Long-distance ✓ ✓ Secondary 1 2 4 Function tags ✓ ✓ New TB format ✓ Table 2: Dependency converters. The “secondary” row shows how many types of secondary dependencies that can be produced by each converter. 4http://nlp.cs.lth.se/software 5http://nlp.stanford.edu/software 6http://www.clearnlp.com 388 Parser Approach Language License ClearNLP v2,37 Transition-based, selectional branching (Choi and McCallum, 2013) Java Apache GN138 Easy-first, dynamic oracle (Goldberg and Nivre, 2013) Python GPL v2 LTDP v2.0.39 Transition-based, beam-search + dynamic prog. (Huang et al., 2012) Python n/a Mate v3.6.110 Maximum spanning tree, 3rd-order features (Bohnet, 2010) Java GPL v2 RBG11 Tensor decomposition, randomized hill-climb (Lei et al., 2014) Java MIT Redshift12 Transition-based, non-monotonic (Honnibal et al., 2013) Cython FOSS spaCy13 Transition-based, greedy, dynamic oracle, Brown clusters Cython Dual SNN14 Transition-based, word embeddings (Chen and Manning, 2014) Java GPL v2 Turbo v2.215 Dual decomposition, 3rd-order features (Martins et al., 2013) C++ GPL v2 Yara16 Transition-based, beam-search, dynamic oracle (Rasooli and Tetreault, 2015) Java Apache Table 3: Dependency parsers used in our experiments. 4 Parsers We compared ten state of the art parsers representing a wide range of contemporary approaches to statistical dependency parsing (Table 3). We trained each parser using the training data from OntoNotes. For all parsers we trained using the automatic POS tags generated during data preprocessing, as described above. Training settings For most parsers, we used the default settings for training. For the SNN parser, following the recommendation of the developers, we used the word embeddings from (Collobert and others, 2011). Development data ClearNLP, LTDP, SNN and Yara make use of the development data (for parameter tuning). Mate and Turbo self-tune parameter settings using the training data. The others were trained using their default/“standard” parameter settings. Beam search ClearNLP, LTDP, Redshift and Yara have the option of different beam settings. The higher the beam size, the more accurate the parser usually becomes, but typically at the expense of speed. For LTDP and Redshift, we experimented with beams of 1, 8, 16 and 64 and found that the highest accuracy was achieved at beam 8.17 For ClearNLP and Yara, a beam size of 7www.clearnlp.com 8cs.bgu.ac.il/˜yoavg/software/sdparser 9acl.cs.qc.edu/˜lhuang 10code.google.com/p/mate-tools 11github.com/taolei87/RBGParser 12github.com/syllog1sm/Redshift 13honnibal.github.io/spaCy 14nlp.stanford.edu/software/nndep.shtml 15www.ark.cs.cmu.edu/TurboParser 16https://github.com/yahoo/YaraParser 17Due to memory limitations we were unable to train Redshift on a beam size greater than 8. 64 produced the best accuracy, while a beam size of 1 for LTDT, ClearNLP, and Yara produced the best speed performance. Given this trend, we also include how those three parsers perform at beam 1 in our analyses. 
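To make the beam-size trade-off concrete, a minimal beam-search decoding loop for a transition-based parser might look as follows. The State interface (legal_actions, apply, is_terminal) and score_fn are hypothetical placeholders, not the API of any of the parsers above; the point is only that a larger beam keeps more partial analyses alive at each step, which typically improves accuracy at the cost of speed, and that beam size 1 reduces to greedy parsing.

    def beam_parse(initial_state, score_fn, beam_size=8):
        # Each beam entry is (cumulative score, parser state).
        beam = [(0.0, initial_state)]
        while not all(state.is_terminal() for _, state in beam):
            candidates = []
            for score, state in beam:
                if state.is_terminal():
                    candidates.append((score, state))
                    continue
                for action in state.legal_actions():
                    candidates.append((score + score_fn(state, action),
                                       state.apply(action)))
            # Keep only the beam_size best hypotheses; beam_size=1 is greedy.
            beam = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_size]
        return max(beam, key=lambda c: c[0])[1]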
Feature Sets RBG, Turbo and Yara have the options of different feature sets. A more complex or larger feature set has the advantage of accuracy, but often at the expense of speed. For RBG and Turbo, we use the ”Standard” setting and for Yara, we use the default (”not basic”) feature setting. Output All the parsers other than LTDP output labeled dependencies. The ClearNLP, Mate, RBG, and Turbo parsers can generate non-projective dependencies. 5 DEPENDABLE: Web-based Evaluation and Visualization Tool There are several very useful tools for evaluating the output of dependency parsers, including the venerable eval.pl18 script used in the CoNLL shared tasks, and newer Java-based tools that support visualization of and search over parse trees such as TedEval (Tsarfaty et al., 2011),19 MaltEval (Nilsson and Nivre, 2008)20 and “What’s wrong with my NLP?”.21 Recently, there is momentum towards web-based tools for annotation and visualization of NLP pipelines (Stenetorp and others, 2012). For this work, we used a new webbased tool, DEPENDABLE, developed by the first author of this paper. It requires no installation and so provides a convenient way to evaluate and compare dependency parsers. The following are key features of DEPENDABLE: 18ilk.uvt.nl/conll/software.html 19www.tsarfaty.com/unipar/ 20www.maltparser.org/malteval.html 21whatswrong.googlecode.com 389 Figure 1: Screenshot of our evaluation tool. • It reads any type of Tab Separated Value (TSV) format, including the CoNLL formats. • It computes LAS, UAS and LS for parse outputs from multiple parsers against gold (manual) parses. • It computes exact match scores for multiple parsers, and “oracle ensemble” output, the upper bound performance obtainable by combining all parser outputs. • It allows the user to exclude symbol tokens, projective trees, or non-projective trees. • It produces detailed analyses by POS tags, dependency labels, sentence lengths, and dependency distances. • It reports statistical significance values for all parse outputs (using McNemar’s test). DEPENDABLE can be also used for visualizing and comparing multiple dependency trees together (Figure 2). A key feature is that the user may select parse trees by specifying a range of accuracy scores; this enabled us to perform the error analyses in Section 6.5. DEPENDABLE allows one to filter trees by sentence length and highlights arc and label errors. The evaluation and comparison tools are publicly available at http://nlp.mathcs.emory.edu/ clearnlp/dependable. Figure 2: Screenshot of our visualization tool. 6 Results and Error Analysis In this section, we report overall parser accuracy and speed. We analyze parser accuracy by sentence length, dependency distance, nonprojectivity, POS tags and dependency labels, and genre. We report detailed manual error analyses focusing on sentences that multiple parsers parsed incorrectly.22 All analyses, other than parsing speed, were conducted using the DEPENDABLE tool.23 The full set of outputs from all parsers, as well as the trained models for each parser, available at http://amandastent. com/dependable/. We also include the greedy parsing results of ClearNLP, LTDP, and Yara in two of our analyses to better illustrate the differences between the greedy and non-greedy settings. The greedy parsing results are denoted by the subscript ‘g’. These two analyses are the overall accuracy results, presented in Section 6.1 (Table 4), and the overall speed results, presented in Section 6.2 ( (Table 5 and Figure ). 
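The significance values that DEPENDABLE reports are computed with McNemar's test, presumably over paired per-token correctness decisions. A minimal sketch of such a test (our own, not the tool's code) is shown below; correct_a and correct_b are parallel boolean lists recording whether parser A and parser B got each token right, and the returned value is the p-value of the continuity-corrected chi-square approximation.

    from math import erfc, sqrt

    def mcnemar_p(correct_a, correct_b):
        # b: tokens only parser A gets right; c: tokens only parser B gets right.
        b = sum(1 for a, b_ok in zip(correct_a, correct_b) if a and not b_ok)
        c = sum(1 for a, b_ok in zip(correct_a, correct_b) if b_ok and not a)
        if b + c == 0:
            return 1.0  # the two parsers never disagree
        stat = (abs(b - c) - 1) ** 2 / (b + c)   # continuity-corrected statistic
        return erfc(sqrt(stat / 2))              # chi-square(1 df) tail probability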
All other analyses exclude the ClearNLPg, LTDPg and Yarag. 22For one sentence in the NW data, the LTDP parser failed to produce a complete parse containing all tokens, so we removed this sentence for all parsers, leaving 11,696 trees (216,313 tokens) in the test data. 23We compared the results produced by DEPENDABLE with those produced by eval07.pl, and verified that LAS, UAS, LA, and EM were the same when punctuation was included. Our tool uses a slightly different symbol set than eval07.pl: !"#$%&’()*+,-./:;<=>?@[\]ˆ ‘{|}˜ 390 With Punctuation Without Punctuation Overall Exact Match Overall Exact Match LAS UAS LS LAS UAS LS LAS UAS LS LAS UAS LS ClearNLPg 89.19 90.63 94.94 47.65 53.00 61.17 90.09 91.72 94.29 49.12 55.01 61.31 GN13 87.59 89.17 93.99 43.78 48.89 56.71 88.75 90.54 93.32 45.44 51.20 56.88 LTDPg n/a 85.75 n/a n/a 46.38 n/a n/a 87.16 n/a n/a 48.01 n/a SNN 86.42 88.15 93.54 42.98 48.53 55.87 87.63 89.59 92.70 43.96 49.83 55.91 spaCy 87.92 89.61 94.08 43.36 48.79 55.67 88.95 90.86 93.32 44.97 51.28 55.70 Yarag 85.93 87.64 92.99 42.94 47.77 54.79 87.39 89.32 92.24 44.25 49.44 54.96 ClearNLP 89.87 91.30 95.28 49.38 55.18 63.18 90.64 92.26 94.67 50.61 56.88 63.24 LTDP n/a 88.18 n/a n/a 51.62 n/a n/a 89.17 n/a n/a 53.54 n/a Mate 90.03 91.62 95.29 49.66 56.44 62.71 90.70 92.50 94.67 50.83 58.36 62.72 RBG 89.57 91.45 94.71 46.49 55.49 58.45 90.23 92.35 94.01 47.64 56.54 58.07 Redshift 89.48 91.01 95.04 49.71 55.82 62.70 90.27 92.00 94.42 50.88 57.28 62.78 Turbo 89.81 91.50 95.00 48.08 55.33 60.49 90.49 92.40 94.34 49.29 57.09 60.52 Yara 89.80 91.36 95.19 50.07 56.18 63.36 90.47 92.24 94.57 51.02 57.53 63.42 Table 4: Overall parsing accuracy. The top 6 rows and the bottom 7 rows show accuracies for greedy and non-greedy parsers, respectively. 6.1 Overall Accuracy In Table 4, we report overall accuracy for each parser. For clarity, we report results separately for greedy and non-greedy versions of the parsers. Over all the different metrics, MATE is a clear winner, though ClearNLP, RBG, Redshift, Turbo and Yara are very close in performance. Looking at only the greedy parsers, ClearNLPg shows a significant advantage over the others. We conducted a statistical significance test for the the parsers (greedy versions excluded). All LAS differences are statistically significant at p < .01 (using McNemar’s test), except for: RBG vs. Redshift, Turbo vs. Yara, Turbo vs. ClearNLP and Yara vs. ClearNLP. All UAS differences are statistically significant at p < .01 (using McNemar’s test), except for: SNN vs. LTDP, Turbo vs. Redshift, Yara vs. RBG and ClearNLP vs. Yara. 6.2 Overall Speed We ran timing experiments on a 64 core machine with 16 Intel Xeon E5620 2.40 GHz processors and 24G RAM, and used the unix time command to time each run. Some parsers are multithreaded; for these, we ran in single-thread mode (since any parser can be externally parallelized). Most parsers do not report model load time, so we first ran each parser five times with a test set of 10 sentences, and then averaged the middle three times to get the average model load time.24 Next, we ran each parser five times with the entire test set and derived the overall parse time by averaging the middle three parse times. We then subtracted the average model time from the average 24Recall we exclude single-token sentences from our tests. parse time and averaged over the number of sentences and tokens. 
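A small harness that approximates this timing protocol (our own sketch, not the scripts actually used; the parser commands and token counts are placeholders) could look like this:

    import statistics, subprocess, time

    def mean_of_middle_runs(cmd, n_runs=5):
        # Run the command n_runs times and average the middle three wall-clock times.
        times = []
        for _ in range(n_runs):
            start = time.perf_counter()
            subprocess.run(cmd, check=True)
            times.append(time.perf_counter() - start)
        return statistics.mean(sorted(times)[1:-1])  # drop fastest and slowest run

    # Hypothetical usage: `load_cmd` parses a 10-sentence file (dominated by
    # model loading), `parse_cmd` parses the entire test set.
    # load_time  = mean_of_middle_runs(load_cmd)
    # parse_time = mean_of_middle_runs(parse_cmd) - load_time
    # sents_per_sec  = n_sentences / parse_time
    # tokens_per_sec = n_tokens / parse_time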
Sent/Sec Tokens/Sec Language ClearNLPg 555 10,271 Java GN13 95 1,757 Python LTDPg 232 4,287 Python SNN 465 8,602 Java spaCy 755 13,963 Cython Yarag 532 9,838 Java ClearNLP 72 1,324 Java LTDP 26 488 Python Mate 30 550 Java RBG 57 1,056 Java Redshift 188 3,470 Cython Turbo 19 349 C++ Yara 18 340 Java Table 5: Overall parsing speed. Figure 3: Number of sentences parsed per second by each parser with respect to sentence length. Table 5 shows overall parsing speed for each parser. spaCy is the fastest greedy parser and Redshift is the fastest non-greedy parser. Figure 3 391 shows an analysis of parsing speed by sentence length in bins of length 10. As expected, as sentence length increases, parsing speed decreases remarkably. 6.3 Detailed Accuracy Analyses For the following more detailed analyses, we used all tokens (including punctuation). As mentioned earlier, we exclude ClearNLPg, LTDPg and Yarag from these analyses and instead use their respective non-greedy modes yielding higher accuracy. Sentence Length We analyzed parser accuracy by sentence length in bins of length 10 (Figure 4). As expected, all parsers perform better on shorter sentences. For sentences under length 10, UAS ranges from 93.49 to 95.5; however, UAS declines to a range of 81.66 and 86.61 for sentence lengths greater than 50. The most accurate parsers (ClearNLP, Mate, RBG, Redshift, Turbo, and Yara) separate from the remaining when sentence length is more than 20 tokens. Figure 4: UAS by sentence length. Dependency Distance We analyzed parser accuracy by dependency distance (depth from each dependent to its head; Figure 5). Accuracy falls off more slowly as dependency distance increases for the top 6 parsers vs. the rest. Projectivity Some of our parsers only produce projective parses. Table 6 shows parsing accuracy for trees containing only projective arcs (11,231 trees, 202,521 tokens) and for trees containing non-projective arcs (465 trees, 13,792 tokens). As before, all differences are statistically significant at p < .01 except for: Redshift vs. RBG for overall LAS; LTDP vs. SNN for overall UAS; and Turbo vs. SpaCy for overall UAS. For strictly projective trees, the LTDP parser is 5th from the top in UAS. Apart from this, the grouping between “very good” and “good” parsers does not change. Figure 5: UAS by dependency distance. Projective only Non-proj. only LAS UAS LAS UAS ClearNLP 90.20 91.62 85.10 86.72 GN13 88.00 89.57 81.56 83.37 LTDP n/a 90.24 n/a 57.83 Mate 90.34 91.91 85.51 87.40 RBG 89.86 91.72 84.83 86.94 Redshift 89.90 91.41 83.30 85.12 SNN 86.83 88.55 80.37 82.32 spaCy 88.31 89.99 82.15 84.08 Turbo 88.36 89.90 83.50 85.30 Yara 90.20 91.74 83.92 85.74 Table 6: Accuracy for proj. and non-proj. trees. Dependency Relations We were interested in which dependency relations were computed with high/low overall accuracy, and for which accuracy varied between parsers. The dependency relations with the highest average LAS scores (> 97%) were possessive, hyph, expl, hmod, aux, det and poss. These relations have strong lexical clues (e.g. possessive) or occur very often (e.g. det). Those with the lowest LAS scores (< 50%) were csubjpass, meta, dep, nmod and parataxis. These either occur rarely or are very general (dep). The most “confusing” dependency relations (those with the biggest range of accuracies across parsers) were csubj, preconj, csubjpass, parataxis, meta and oprd (all with a spread of > 20%). The Mate and Yara parsers each had the highest accuracy for 3 out of the top 10 “confusing” dependency relations. 
The RBG parser 392 had the highest accuracy for 4 out of the top 10 “most accurate” dependency relations. SNN had the lowest accuracy for 5 out of the top 10 “least accurate” dependency relations, while the RBG had the lowest accuracy for another 4. POS Tags We also examined error types by part of speech tag of the dependent. The POS tags with the highest average LAS scores (> 97%) were the highly unambiguous tags POS, WP$, MD, TO, HYPH, EX, PRP and PRP$. With the exception of WP$, these tags occur frequently. Those with the lowest average LAS scores (< 75%) were punctuation markers ((, ) and :, and the rare tags AFX, FW, NFP and LS. Genres Table 7 shows parsing accuracy for each parser for each of the seven genres comprising the English portion of OntoNotes 5. Mate and ClearNLP are responsible for the highest accuracy for some genres, although accuracy differences among the top four parsers are generally small. Accuracy is highest for PT (pivot text, the Bible) and lowest for TC (telephone conversation) and WB (web data). The web data is itself multi-genre and includes translations from Arabic and Chinese, while telephone conversation data includes disfluencies and informal language. 6.4 Oracle Ensemble Performance One popular method for achieving higher accuracy on a classification task is to use system combination (Bj¨orkelund and others, 2014; Le Roux and others, 2012; Le Roux et al., 2013; Sagae and Lavie, 2006; Sagae and Tsujii, 2010; Haffari et al., 2011). DEPENDABLE reports ensemble upper bound performance assuming that the best tree can be identified by an oracle (macro), or that the best arc can be identified by an oracle (micro). Table 8 provides an upper bound on ensemble performance for future work. LAS UAS LS Macro 94.66 96.00 97.82 Micro 96.52 97.61 98.40 Table 8: Oracle ensemble performance. The highest match was achieved between the RBG and Mate parser (62.22 UAS). ClearNLP, GN13 and LTDP all matched with Redshift the best, and RBG, Redshift and Turbo matched with Mate the best. SNN, spaCy and Turbo did not match well with other parsers; their respective ”best match” score was never higher than 55. 6.5 Error Analysis From the test data, we pulled out parses where only one parser achieved very high accuracy, and parses where only one parser had low accuracy (Table 9). As with the detailed performance analyses, we used the most accurate version of each parser for this analysis. Mate has the highest number of “generally good” parses, while the SNN parser has the highest number of “uniquely bad” parses. The SNN parser tended to choose the wrong root, but this did not appear to be tied to the number of verbs in the sentence - rather, the SNN parser just makes the earliest “reasonable” choice of root. Parser UAS ≥90 = 100 < 90 < 90 All others UAS < 90 < 90 ≥90 = 100 ClearNLP 42 11 45 15 LTDP 29 12 182 36 GN13 26 8 148 65 Mate 75 19 44 10 RBG 49 21 49 15 Redshift 38 17 28 8 SNN 70 23 417 142 spaCy 48 17 218 73 Turbo 54 15 28 14 Yara 33 15 27 7 Table 9: Differential parsing accuracies. To further analyze these results, we first looked at the parse trees for “errorful” sentences where the parsers agreed. From the test data, we extracted parses for sentences where at least two parsers got UAS of < 50%. This gave us 253 sentences. The distribution of these errors across genres varied: PT - 2.8%, MZ - 3.5%, BN - 9.8%, NW - 10.3%, WB - 17.4%, BC - 25.3%, TC - 30.8%. By manual comparison using the DEPENDABLE tool, we identified frequently occurring potential sources of error. 
We then manually annotated all sentences for these error types. Figure 6 shows the number of “errorful” sentences of each type. Punctuation attachment “errors” are prevalent. For genres with “noisy” text (e.g. broadcast conversation, telephone conversation) a significant proportion of errors come from fragmented sentences or those containing backchannels or disfluencies. There are also a number of sentences with what appeared to be manual dependency labeling errors in the gold annotation. 393 BC BN MZ NW PT TC WB LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS LAS UAS ClearNLP 88.95 90.36 89.59 91.01 89.56 91.24 89.79 91.08 95.88 96.68 87.17 88.93 87.93 89.83 GN13 86.75 88.40 87.38 88.87 87.31 89.10 87.36 88.84 94.06 95.00 85.68 87.60 85.20 87.19 LTDP n/a 86.81 n/a 87.43 n/a 88.87 n/a 88.40 n/a 93.52 n/a 85.85 n/a 86.37 Mate 89.03 90.73 89.30 90.82 90.09 91.92 90.28 91.68 95.71 96.64 87.86 89.87 87.86 89.89 RBG 88.64 90.58 88.99 90.86 89.28 91.45 89.85 91.47 95.27 96.41 87.36 89.65 87.12 89.61 Redshift 88.60 90.19 88.96 90.46 89.11 90.90 89.63 90.99 95.36 96.22 87.14 88.99 87.27 89.31 SNN 85.35 87.08 86.13 87.78 86.00 87.92 86.17 87.74 93.47 94.64 83.50 85.74 84.29 86.50 spaCy 87.27 89.05 87.70 89.31 87.37 89.29 88.00 89.52 94.28 95.27 85.67 87.65 85.16 87.40 Turbo 87.05 88.70 87.58 89.04 88.34 90.02 87.95 89.33 94.39 95.36 85.91 87.93 85.66 87.70 Yara 88.90 90.53 89.40 90.89 89.72 91.42 90.00 91.41 95.41 96.32 87.35 89.19 87.55 89.61 Total 2211 1357 780 2326 1869 1366 1787 Table 7: Parsing accuracy by genre. Figure 6: Common error types in erroneous trees. 6.6 Recommendations Each of the transition-based parsers that was included in this evaluation can use varying beam widths to trade off speed vs. accuracy, and each parser has numerous other parameters that can be tuned. Notwithstanding all these variables, we can make some recommendations. Figure 7 illustrates the speed vs. accuracy tradeoff across the parsers. For highest accuracy (e.g. in dialog systems), Mate, RBG, Turbo, ClearNLP and Yara are good choices. For highest speed (e.g. in web-scale NLP), spaCy and ClearNLPg are good choices; SNN and Yarag are also good choices when accuracy is relatively not as important. 7 Conclusions and Future Work In this paper we have: (a) provided a detailed comparative analysis of several state-of-the-art statistical dependency parsers, focusing on accuracy Figure 7: Speed with respect to accuracy. and speed; and (b) presented DEPENDABLE, a new web-based evaluation and visualization tool for analyzing dependency parsers. DEPENDABLE supports a wide range of useful functionalities. In the future, we plan to add regular expression search over parses, and sorting within results tables. Our hope is that the results from the evaluation as well as the tool will give non-experts in parsing better insight into which parsing tool works well under differing conditions. We also hope that the tool can be used to facilitate evaluation and be used as a teaching aid in NLP courses. Supplements to this paper include the tool, the parse outputs, the statistical models for each parser, and the new set of dependency trees for OntoNotes 5 created using the ClearNLP dependency converter. We do recommend examining one’s data and task before choosing and/or training a parser. Are non-projective parses likely or desirable? Does the data contain disfluencies, sentence fragments, and other “noisy text” phenomena? What is the average and standard deviation for sentence length and dependency length? 
The analyses in this paper can be used to select a parser if one has the answers to these questions. 394 In this work we did not implement an ensemble of parsers, partly because an ensemble necessarily entails complexity and/or speed delays that render it unusable by all but experts. However, our analyses indicate that it may be possible to achieve small but significant increases in accuracy of dependency parsing through ensemble methods. A good place to start would be with ClearNLP, Mate, or Redshift in combination with LTDP and Turbo, SNN or spaCy. In addition, it may be possible to achieve good performance in particular genres by doing “mini-ensembles” trained on general purpose data (e.g. WB) and genre-specific data. We leave this for future work. We also leave for future work the comparison of these parsers across languages. It remains to be seen what downstream impact differences in parsing accuracy of 2-5% have on the goal task. If the impact is small, then speed and ease of use are the criteria to optimize, and here spaCy, ClearNLPg, Yarag and SNN are good choices. Acknowledgments We would like to thank the researchers who have made available data (especially OntoNotes), parsers (especially those compared in this work), and evaluation and visualization tools. Special thanks go to Boris Abramzon, Matthew Honnibal, Tao Lei, Danqi Li and Mohammad Sadegh Rasooli for assistance in installation, trouble-shooting and general discussion. Additional thanks goes to the kind folks from the SANCL-SPMRL community for an informative discussion of evaluation and visualization tools. Finally, we would like to thank the three reviewers, as well as Martin Chodorow, Dean Foster, Joseph Le Roux and Robert Stine, for feedback on this paper. References Emily M. Bender, Dan Flickinger, Stephan Oepen, and Yi Zhang. 2011. Parser evaluation over local and non-local deep dependencies in a large corpus. In Proceedings of EMNLP. Anders Bj¨orkelund et al. 2014. Introducing the IMSWrocław-Szeged-CIS entry at the SPMRL 2014 shared task: Reranking and morpho-syntax meet unlabeled data. In Proceedings of the First Joint Workshop on Statistical Parsing of Morphologically Rich Languages and Syntactic Analysis of NonCanonical Languages. Bernd Bohnet. 2010. Very high accuracy and fast dependency parsing is not a contradiction. In Proceedings of COLING. Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In Proceedings of CoNLL. Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of EMNLP. Jinho D. Choi and Andrew McCallum. 2013. Transition-based Dependency Parsing with Selectional Branching. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL’13, pages 1052–1062. Jinho D. Choi and Martha Palmer. 2012a. Fast and Robust Part-of-Speech Tagging Using Dynamic Model Selection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, ACL’12, pages 363–367. Jinho D. Choi and Martha Palmer. 2012b. Guidelines for the Clear Style Constituent to Dependency Conversion. Technical Report 01-12, Institute of Cognitive Science, University of Colorado Boulder, Boulder, CO, USA. Ronan Collobert et al. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The Stanford typed dependencies representation. 
In Proceedings of the COLING workshop on Cross-Framework and Cross-Domain Parser Evaluation. Yoav Goldberg and Joakim Nivre. 2013. Training deterministic parser with non-deterministic oracles. In Proceedings of TACL. Gholamreza Haffari, Marzieh Razavi, and Anoop Sarkar. 2011. An ensemble model that combines syntactic and semantic clustering for discriminative dependency parsing. In Proceedings of ACL-HLT. Jan Hajiˇc et al. 2009. The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of CoNLL. Matthew Honnibal, Yoav Goldberg, and Mark Johnson. 2013. A non-monotonic arc-eager transition system for dependency parsing. In Proceedings of CoNLL. Liang Huang, Suphan Fayong, and Yang Guo. 2012. Structured perceptron with inexact search. In Proceedings of the NAACL. Richard Johansson and Pierre Nugues. 2007. Extended constituent-to-dependency conversion for English. In Proceedings of NODALIDA. 395 Jonathan K. Kummerfeld et al. 2012. Parser showdown at the wall street corral: An empirical investigation of error types in parser output. In Proceedings of EMNLP. Joseph Le Roux et al. 2012. DCU-Paris13 systems for the SANCL 2012 shared task. In Proceedings of the First Workshop on Syntactic Analysis of NonCanonical Language (SANCL). Joseph Le Roux, Antoine Rozenknop, and Jennifer Foster. 2013. Combining PCFG-LA models with dual decomposition: A case study with function labels and binarization. In Proceedings of EMNLP. Tao Lei, Yu Xin, Yuan Zhang, Regina Barzilay, and Tommi Jaakkola. 2014. Low-rank tensors for scoring dependency structures. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1381–1391, Baltimore, Maryland, June. Association for Computational Linguistics. Mitchell P. Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: the Penn treebank. Computational Linguistics, 19(2):313–330. Andr´e F. T. Martins, Miguel B. Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order non-projective turbo parsers. In Proceedings of the ACL. Ryan McDonald and Joakim Nivre. 2007. Characterizing the errors of data-driven dependency parsing models. In Proceedings of EMNLP-CoNLL. Ryan McDonald and Joakim Nivre. 2011. Analyzing and integrating dependency parsers. Computational Linguistics, 37(1):197–230. Makoto Miwa et al. 2010. A comparative study of syntactic parsers for event extraction. In Proceedings of BioNLP. Jens Nilsson and Joakim Nivre. 2008. MaltEval: An evaluation and visualization tool for dependency parsing. In Proceedings of LREC. Joakim Nivre et al. 2007. The CoNLL 2007 shared task on dependency parsing. In Proceedings of CoNLL. Joakim Nivre, Laura Rimell, Ryan McDonald, and Carlos G´omez Rodr´ıguez. 2010. Evaluation of dependency parsers on unbounded dependencies. In Proceedings of the 23rd International Conference on Computational Linguistics (Coling 2010), pages 833–841, Beijing, China, August. Coling 2010 Organizing Committee. Stephan Oepen et al. 2014. SemEval 2014 Task 8: Broad-Coverage Semantic Dependency Parsing. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 63–72. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In Proceedings of the First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL) Shared Task. Slav Petrov, Pi-Chuan Chang, Michael Ringgaard, and Hiyan Alshawi. 2010. 
Uptraining for accurate deterministic question parsing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 705–713, Cambridge, MA, October. Association for Computational Linguistics. Sameer Pradhan et al. 2013. Towards robust linguistic analysis using OntoNotes. In Proceedings of CoNLL. Mohammad Sadegh Rasooli and Joel R. Tetreault. 2015. Yara parser: A fast and accurate dependency parser. CoRR, abs/1503.06733. Kenji Sagae and Alon Lavie. 2006. Parser combination by reparsing. In Proceedings HLT-NAACL. Kenji Sagae and Jun’ichi Tsujii. 2010. Dependency parsing and domain adaptation with data-driven LR models and parser ensembles. In Trends in Parsing Technology: Dependency Parsing, Domain Adaptation, and Deep Parsing, pages 57–68. Springer. Djam´e Seddah et al. 2013. Overview of the SPMRL 2013 shared task: A cross-framework evaluation of parsing morphologically rich languages. In Proceedings of the 4th Workshop on Statistical Parsing of Morphologically Rich Languages. Pontus Stenetorp et al. 2012. BRAT: A web-based tool for NLP-assisted text annotation. In Proceedings of the EACL. Mihai Surdeanu et al. 2008. The CoNLL-2008 shared task on joint parsing of syntactic and semantic dependencies. In Proceedings of CoNLL. Reut Tsarfaty, Joakim Nivre, and Evelina Andersson. 2011. Evaluating dependency parsing: Robust and heuristics-free cross-annotation evaluation. In Proceedings of EMNLP. Ralph Weischedel et al. 2011. OntoNotes: A large training corpus for enhanced processing. In Joseph Olive, Caitlin Christianson, and John McCary, editors, Handbook of Natural Language Processing and Machine Translation. Springer. Deniz Yuret, Laura Rimell, and Aydin Han. 2013. Parser evaluation using textual entailments. Language Resources and Evaluation, 47(3). 396
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 397–407, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Generating High Quality Proposition Banks for Multilingual Semantic Role Labeling Alan Akbik∗ Technische Universit¨at Berlin, Germany [email protected] Laura Chiticariu Marina Danilevsky Yunyao Li Shivakumar Vaithyanathan Huaiyu Zhu IBM Research - Almaden 650 Harry Road, San Jose, CA 95120, USA {chiti,mdanile,yunyaoli,vaithyan,huaiyu}@us.ibm.com Abstract Semantic role labeling (SRL) is crucial to natural language understanding as it identifies the predicate-argument structure in text with semantic labels. Unfortunately, resources required to construct SRL models are expensive to obtain and simply do not exist for most languages. In this paper, we present a two-stage method to enable the construction of SRL models for resourcepoor languages by exploiting monolingual SRL and multilingual parallel data. Experimental results show that our method outperforms existing methods. We use our method to generate Proposition Banks with high to reasonable quality for 7 languages in three language families and release these resources to the research community. 1 Introduction Semantic role labeling (SRL) is the task of automatically labeling predicates and arguments in a sentence with shallow semantic labels. This level of analysis provides a more stable semantic representation across syntactically different sentences, thereby enabling a range of NLP tasks such as information extraction and question answering (Shen and Lapata, 2007; Maqsud et al., 2014). Projects such as the Proposition Bank (PropBank) (Palmer et al., 2005) spent considerable effort to annotate corpora with semantic labels, in turn enabling supervised learning of statistical SRL parsers for English. Unfor∗This work was conducted at IBM. tunately, due to the high costs of manual annotation, comparable SRL resources do not exist for most other languages, with few exceptions (Hajiˇc et al., 2009; Erk et al., 2003; Zaghouani et al., 2010; Vaidya et al., 2011). As a cost-effective alternative to manual annotation, previous work has investigated the direct projection of semantic labels from a resource rich language (English) to a resource poor target language (TL) in parallel corpora (Pado, 2007; Van der Plas et al., 2011). The underlying assumption is that original and translated sentences in parallel corpora are semantically broadly equivalent. Hence, if English sentences of a parallel corpus are automatically labeled using an SRL system, these labels can be projected onto aligned words in the TL corpus, thereby automatically labeling the TL corpus with semantic labels. This way, PropBank-like resources can automatically be created that enable the training of statistical SRL systems for new TLs. However, as noted in previous work (Pado, 2007; Van der Plas et al., 2011), aligned sentences in parallel corpora often exibit issues such as translation We need to hold people responsible A0 need.01 A1 A0 hold.01 A1 A2 Il faut qu' il y des ait responsables need.01 A1 it needs exist those responsible that there exist.01 A1 need.01 A1 TL SL Figure 1: Pair of parallel sentences from Frenchgoldwith word alignments (dotted lines), SRL labels for the English sentence, and gold SRL labels for the French sentence. Only two of the seven English SRL labels should be projected here. 
397 Figure 2: Overview of the proposed two-stage approach for projecting English (EN) semantic role labels onto a TL corpus. shifts that go against this assumption. For example, in Fig. 1, the English sentence “We need to hold people responsible” is translated into a French sentence that literally reads as “There need to exist those responsible”. Hence, the predicate label of the English word “hold” should not be projected onto the French verb, which has a different meaning. As the example in Fig. 1 shows, this means that only a subset of all SL labels can be directly projected. In this paper, we aim to create PropBank-like resources for a range of languages from different language groups. To this end, we propose a two-stage approach to cross-lingual semantic labeling that addresses such errors, shown in Fig. 2: Given a parallel corpus in which the source language (SL) side is automatically labeled with PropBank labels and the TL side is syntactically parsed, we use a filtered projection approach that allows the projection only of high-confidence SL labels. This results in a TL corpus with low recall but high precision. In the second stage, we repeatedly sample a subset of complete TL sentences and train a classifier to iteratively add new labels, significantly increasing the recall in the TL corpus while retaining the improvement in precision. Our contributions are: (1) We propose filtered projection focused specifically on raising the precision of projected labels, based on a detailed analysis of direct projection errors. (2) We propose a bootstrap learning approach to retrain the SRL to iteratively improve recall without a significant reduction of precision, especially for arguments; (3) We demonstrate the effectiveness and generalizability of our approach via an extensive set of experiments over 7 different language pairs. (4) We generate PropBanks for each of these languages and release them to the research community.1 2 Stage 1: Filtered Annotation Projection Stage 1 of our approach (Fig. 2) is designed to create a TL corpus with high precision semantic labels. Direct Projection The idea of direct annotation projection (Van der Plas et al., 2011) is to transfer semantic labels from SL sentences to TL sentences according to word alignments. Formally, for each pair of sentences sSL and sTL in the parallel corpus, the word alignment produces alignment pairs (wSL,i, wTL,i′), where wSL,i and wTL,i′ are words from sSL and sTL respectively. Under direct projection, if lSL,i is a predicate label for wSL,i and (wSL,i, wTL,i′) is an alignment pair, then lSL,i is transferred to wTL,i′; If lSL,j is a predicate-argument label for (wSL,i, wSL,j), and (wSL,i, wTL,i′) and (wSL,j, wTL,j′) are alignment pairs, then lSL,j is transferred to (wTL,i′, wTL,j′), as illustrated below. Filtered Projection As discussed earlier, direct projection is vulnerable to errors stemming from issues such as translation shifts. We propose filtered projection focused specifically on improving the precision of projected labels. Specifically, for a pair of sentences sSL and sTL in the parallel corpus, we retain the semantic label lSL,i projected from wSL,i onto wTL,i′ if and only if it satisfies the filtering policies. This results in a target corpus containing fewer labels but of higher precision compared to that obtained via direct projection. In the rest of the section, we analyze typical errors in direct projection (Sec. 2.2), present a set of filters to handle such errors (Sec. 
2.3), and experimentally evaluate their effectiveness (Sec. 2.4). 1The resources are available on request. 398 ERROR CLASS NUMBER Translation Shift: Predicate Mismatch 37 Translation Shift: Verb→Non-verb 36 No English Equivalent 8 Gold Data Errors 6 SRL Errors 5 Verb (near-)Synonyms 4 Light Verb Construction 3 Alignment Errors 1 Total 100 Table 1: Breakdown of error classes in predicate projection. 2.1 Experimental Setup Data For experiments in this section and Sec. 3, we used the gold data set compiled by (Van der Plas et al., 2011), referred to as Frenchgold. It consists of 1,000 sentence-pairs from the English-French Europarl corpus (Koehn, 2005) with French sentences manually labeled with predicate and argument labels from the English Propbank. Evaluation In line with previous work (Van der Plas et al., 2010), we count synonymous predicate labels sharing the same VERBNET (Schuler, 2005) class as true positives.2 In addition, we exclude modal verbs from the evaluation due to inconsistent annotation. Source Language SRL Throughout the rest of the paper, we use CLEARNLP (Choi and McCallum, 2013), a state-of-the-art SRL system, to produce semantic labels for English text. 2.2 Error Analysis We observe that direct projection labels have both low precision and low recall (see Tab. 3 (Direct)). Analysis of False Negatives The low recall of direct projection is not surprising; most semantic labels in the French sentences do not appear in the corresponding English sentences at all. Specifically, among 1,741 predicate labels in the French sentences, only 778 exist in the corresponding English sentences, imposing a 45% upper bound on the recall for projected predicates. Similarly, of the 5,061 argument labels in the French sentences, only 1,757 exist in the corresponding English sentences, resulting in a 35% upper bound on recall for arguments.3 2For instance, the French verb sembler may be correctly labeled as either of the synonyms: seem.01 or appear.02. 3This upper bound is different from the one reported in (Van der Plas et al., 2011) which corresponds to the interannotator agreement over manual annotation of 100 sentences. ERROR CLASS NUMBER Non-Argument Head 33 SRL Errors 31 No English Equivalent 12 Gold Data Errors 11 Translation Shift: Argument Function 6 Parsing Errors 4 Alignment Errors 3 Total 100 Table 2: Breakdown of error classes in argument projection. Analysis of False Positives While the recall produced by direct projection is close to the theoretical upper bound, the precision is far from the theoretical upper bound of 100%. To understand causes of false positives, we examine a random sample of 200 false positives, of which 100 are incorrect predicate labels, and 100 are incorrect argument labels belonging to correctly projected predicates. Tab. 1 and 2 show the detailed breakdown of errors for predicates and arguments, respectively. We first analyze the most common types of errors and discuss the residual errors later in Sec. 2.5. • Translation Shift: Predicate Mismatch The most common predicate errors (37%) are translation shifts in which an English predicate is aligned to a French verb with a different meaning. Fig. 1 illustrates such a translation shift: label hold.01 of English verb hold is wrongly projected onto the French verb ait, which is labeled as exist.01 in Frenchgold. • Translation Shift: Verb→Non-Verb is another common predicate error (36%). English verbs may be aligned with TL words other than verbs, which is often indicative of translation shifts. 
For instance, in the following sentence pair sSL We know what happened sFR On connait la suite We know the result the English verb happen is aligned to the French noun suite (result), causing it to be wrongly projected with the English predicate label happen.01. • Non-Argument Head The most common argument error (33%) is caused by the projection of argument labels onto words other than the syntactic head of a target verb’s argument. For example, in Fig. 1 the label A1 on the English hold is wrongly transferred to the French ait, which is not the syntactic head of the complement. 399 2.3 Filters We consider the following filters to remove the most common types of false positives. Verb Filter (VF) targets Verb→Non-Verb translation shift errors (Van der Plas et al., 2011). Formally, if direct projection transfers predicate label lSL,i from wSL,i onto wTL,i′, retain lSL,i only if both wSL,i and wTL,i′ are verbs. Translation Filter (TF) handles both Predicate Mismatch and Verb→Non-Verb translation shift errors. It makes use of a translation dictionary and allows projection only if the TL verb is a valid translation of the SL verb. In addition, in order to ensure consistent predicate labels throughout the TL corpus, if a SL verb has several possible synonymous translations, it allows projection only for the most commonly observed translation. Formally, for an aligned pair (wSL,i, wTL,i′) where wSL,i has predicate label lSL,i, if (wSL,i, wTL,i′) is not a verb to verb translation from SL to TL, assign no label to wTL,i′. Otherwise, split the set of SL translations of wTL,i′ into synonym sets S1, S2, . . . ; For each k, let W k be the subset of Sk most commonly aligned with wTL,i′; If wSL,i is in one of these W k, assign label lSL,i to wTL,i′; Otherwise assign no label to wTL,i′. Reattachment Heuristic (RH) targets nonargument head errors that occur if a TL argument is not the direct child of a verb in the dependency parse tree of its sentence.4 Assume direct projection transfers the predicate-argument label lSL,j from (wSL,i, wSL,j) onto (wTL,i′, wTL,j′). Find the immediate ancestor verb of wTL,j′ in the dependency parse tree. Denote as wTL,k its child that is an ancestor of wTL,j′. Assign the label lSL,j to (wTL,i′, wTL,k) instead of (wTL,i′, wTL,j′). An illustration is below: RH ensures that labels are always attached to the syntactic heads of their respective arguments, as de4In (Pad´o and Lapata, 2009), a similar filtering method is defined over constituent-based trees to reduce the set of viable nodes for argument labels to all nodes that are not a child of some ancestor of the predicate. PREDICATE ARGUMENT PROJECTION P R F1 P R F1 Direct 0.45 0.4 0.43 0.43 0.31 0.36 VF 0.59 0.4 0.48 0.53 0.31 0.39 TF 0.88 0.36 0.51 0.58 0.17 0.27 VF+RH 0.59 0.4 0.48 0.68 0.35 0.46 TF+RH 0.88 0.36 0.51 0.75 0.2 0.31 Upper Bound 1 0.45 0.62 1 0.35 0.51 Table 3: Quality of predicate and argument labels for different projection methods on Frenchgold, including upper bound. termined by the dependency tree. An example of such reattachment is illustrated in Fig. 1 (curved arrow on TL sentence). 2.4 Filter Effectiveness We now present an initial validation on the effectiveness of the aforementioned filters by evaluating their contribution to annotation projection quality for Frenchgold, as summarized in Tab. 3. VF reduces the number of wrongly projected predicate labels, resulting in an increase of predicate precision to 59% (↑14 pp), without impact to recall. 
As a side effect, argument precision also increases to 53% (↑10 pp), since, if a predicate label cannot be projected, none of its arguments can be projected. TF5 reduces the number of wrongly projected predicate labels even more significantly, increasing predicate precision to 88% (↑43 pp), at a small cost to recall. Again, argument precision increases as a side effect. However, as expected, argument recall decreases significantly (↓14 pp, to 17%), as many arguments can no longer be projected. RH targets argument labels directly (unlike VF and TF), significantly increasing argument precision and slightly increasing argument recall. In summary, initial experiments confirm that our proposed filters are effective in improving precision of projected labels at a small cost in recall. In fact, TF+RH results in nearly 100% improvement in predicate and argument labels precision with a much smaller drop in recall. 2.5 Residual Errors Filtered projection removes the most common errors discussed in Sec. 2.2. Most of the remaining errors 5In all experiments in this paper, we derived the translation dictionaries from the WIKTIONARY project and used VERBNET and WORDNET to find SL synonym groups. 400 come from the following sources. SRL Errors The most common residual errors in the remaining projected labels, especially for argument labels, are caused by mistakes made by the English SRL system. Any wrong label it assigns to an English sentence may be projected onto the TL sentence, resulting in false positives. No English Equivalent A small number of errors occur due to French particularities that do not exist in English. Such errors include certain French verbs for which no appropriate English PropBank labels exists, and French-specific syntactic particularities.6 Gold Data Errors Our evaluation so far relies on Frenchgold as ground truth. Unfortunately, Frenchgold does contain a small number of errors (e.g. missing argument labels). As a result, some correctly projected labels are being mistaken as false positives, causing a drop in both precision and recall. We therefore expect the true precision and recall of the approach to be somewhat higher than the estimate based on Frenchgold. 3 Stage 2: Bootstrapped Training of SRL As discussed earlier, the TL corpus generated via filtered projection suffers from low recall. We address this issue with the second stage of our method. Relabeling The idea of relabeling (Van der Plas et al., 2011) is to first train an SRL system over a TL corpus labeled using direct projection (with VF filter) and then use this SRL to relabel the corpus, effectively overwriting the projected labels with potentially less noisy predicted labels. We first present an analysis on relabeling in concert with our proposed filters (Sec. 3.1), which motivates our bootstrap algorithm (Sec. 3.2). 3.1 Analysis of Relabeling Approach We use the same experimental setup as in Sec. 2, and produce a labeled French corpus for each filtered annotation method. We then train an off-the-shelf SRL system (Bj¨orkelund et al., 2009) on each generated corpus and use it to relabel the corpus. We measure precision and recall of each resulting TL corpus against Frenchgold (see Tab. 4). Across all 6French negations, for instance, are split into a particle and a connegative. In the annotation scheme used in Frenchgold, particles and connegatives are labeled differently. 
PROJECTION PREDICATE ARGUMENT SRL training P R F1 P R F1 DIRECT – 0.45 0.40 0.43 0.43 0.31 0.36 relabel (SP) 0.49 0.57 0.53 0.52 0.43 0.47 relabel (OW) 0.66 0.60 0.63 0.71 0.37 0.49 VERB FILTER (VF) – 0.59 0.40 0.48 0.53 0.31 0.39 relabel (SP) 0.57 0.55 0.56 0.61 0.42 0.50 relabel (OW) 0.56 0.55 0.56 0.69 0.31 0.43 (Van der Plas et al., 2011) PROPOSED (TF+RH) – 0.88 0.36 0.51 0.75 0.20 0.31 relabelfull data(SP) 0.83 0.58 0.68 0.75 0.41 0.53 relabelfull data(OW) 0.78 0.51 0.62 0.73 0.35 0.47 relabelcomp. sent.(SP) 0.80 0.64 0.71 0.68 0.48 0.56 relabelcomp. sent.(OW) 0.62 0.60 0.61 0.55 0.40 0.47 bootstrap (iter. 3) 0.78 0.68 0.73 0.71 0.55 0.62 bootstrap (terminate)0.77 0.70 0.73 0.64 0.60 0.62 Table 4: Experiments on Frenchgold, with different projection and SRL training methods. SP=Supplement; OW=Overwrite. experiments, relabeling consistently improves recall over projection. The results also show how different factors affect the performance of relabeling. Supplement vs. Overwrite Projected Labels The labels produced by the trained SRL can be used to either overwrite projected labels as in (Van der Plas et al., 2011), or to supplement them (supplying labels only for words w/o projected labels). Whether to overwrite or supplement depends on whether labels produced by the trained SRL are of higher quality than the projected labels. We find that while predicted labels are of higher precision than directly projected labels, they are of lower precision than labels post filtered projection. Therefore, for filtered projection, it makes more sense to allow predicted labels to only supplement projected labels. Impact of Sampling Method We are further interested in learning the impact of sampling the data on the quality of relabeling. For the best filter found earlier (TF+RH), we compare SRL trained on the entire data set (full data) with SRL trained only on the subset of completely annotated sentences (comp. sent.), where completeness is defined as: Definition 1. A direct component of a labeled sentence sTL is either a verb in sTL or a syntactic dependent of a verb. Then sTL is k-complete if sTL contains equal to or fewer than k unlabeled direct compo401 Algorithm 1 Bootstrap learning algorithm Require: Corpus CTL with initial set of labels LTL, and resampling threshold function k(i); for i = 1 to ∞do Let ki = k(i); Let CTL comp = {w ∈CTL : w ∈sTL, sTLis ki-complete}; Let LTL comp be subset of LTL appearing on CTL comp; Train an SRL on (CTL comp, LTL comp); Use the SRL to produce label set LTL new on CTL; Let CTL no.lab = {w ∈CTL : w not labelled by LTL}; Let LTL suppl be subset of LTL new appearing on CTL no.lab; if LTL suppl = ∅then Return the SRL; end if Let LTL = LTL ∪LTL suppl; end for nents. 0-complete is abbreviated as complete. We observe that for TF+RH, when new labels supplement projected labels, relabeling over complete sentences results in better recall at slightly reduced precision, while including incomplete sentences into the training data reduces recall, but improves precision. While this finding may seem counterintuitive, it can be explained by how statistical SRL works. A densely labeled training data (such as comp. sent.) usually results in an SRL that generates densely labeled sentences, resulting in better recall but poorer precision. On the other hand, training data that is sparsely labeled results in an SRL that weighs the option of not assigning a label with higher probability, resulting in better precision and poorer recall. 
In short, one can control the tradeoff between precision and recall of SRL output by manipulating the completeness of the training data. 3.2 Bootstrap Learning Building on the observation that we can sample data in such a way as to either favor precision or recall, we propose a bootstrapping algorithm to train an SRL iteratively over k-complete subsets of the data which are supplemented by high precision labels produced from previous iteration. The detailed algorithm is depicted in Algorithm 1. Resampling Threshold Our goal is to use bootstrap learning to improve recall without sacrificing too much precision. Proposition 1. Under any resampling threshold, the set of labels LTL increases monotonically in each iteration of Algorithm 1. Figure 3: Values at each bootstrap iteration. Since Prop. 1 guarantees the increase of the set of labels, we need to select a resampling function to favor precision while improving recall. Specifically, we use the formula k(i) = max(k0 −i, 0), where k0 is sufficiently large. Since the precision of labels generated by the SRL is lower than the precision of labels obtained from filtered projection, the precision of the training data is expected to decrease with the increase in recall. Therefore, starting with a high k seeks to ensure high precision labels are added to the training data in the first iterations. Decreasing k in each iteration seeks to ensure that resampling is done in an increasingly restrictive way to ensure that only high-quality annotated sentences are added to the training data, thus maintaining a high confidence in the learned SRL model. 3.3 Effectiveness of Bootstrapping We experimentally evaluate the effectiveness of our model with k0 = 9.7 As shown in Tab 4, bootstrapping outperforms relabeling, producing labels with best overall quality in terms of F1 measure and recall for both predicates and arguments, with a relatively small cost in precision. While Algorithm 1 guarantees the increase of recall (Prop. 1), it provides no such guarantee on precision. Therefore, it is important to experimentally decide an early termination cutoff before the SRL gets overtrained. To do so, we evaluated the performance of the bootstrapping algorithm at each iteration (Fig. 3). We observe that for the first 3 iterations, F1-measure for both predicates and arguments rises due to large increase in recall which offsets the smaller drop in precision. Then F1measure remains stable, with recall rising and pre7We found that setting k0 to larger values had little impact on the final results . 402 LANGUAGE DEP. PARSER DATA SET #SENTENCE Arabic STANFORD UN 481K Chinese MATE-G UN 2,986K French MATE-T UN 2,542K German MATE-T Europarl 560K Hindi MALT Hindencorp 54K Russian MALT UN 2,638K Spanish MATE-G UN 2,304K Table 5: Experimental setup . Dependency parsers: STANFORD: (Green and Manning, 2010), MATE-G: (Bohnet, 2010), MATE-T: (Bohnet and Nivre, 2012), MALT: (Nivre et al., 2006). Parallel corpora: UN: (Rafalovitch et al., 2009), Europarl: (Koehn, 2005), Hindencorp: (Bojar et al., 2014). Word alignment: The UN corpus is already word-aligned. For others, we use the Berkeley Aligner (DeNero and Liang, 2007). cision falling slightly at each iteration until convergence. To optimize precision and avoid overtraining, we set an iteration cutoff of 3. This combination of TF+RH filters, bootstrapping with k0 = 9 and an iteration cutoff of 3 is used in the rest of our evaluation (Sec. 4), denoted as FBbest . 
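A compact sketch of the bootstrap loop in Algorithm 1 may also be helpful; train_srl and predict_labels stand in for the off-the-shelf SRL system, the label storage on tokens reuses the is_k_complete sketch above, and the default values reflect the settings reported in this section (k0 = 9, iteration cutoff of 3) rather than a definitive implementation.

def bootstrap(corpus, k0=9, max_iter=3):
    # corpus: sentences whose tokens carry projected labels in tok["label"]
    # (None where no label was projected).
    srl = None
    for i in range(1, max_iter + 1):
        k = max(k0 - i, 0)                          # resampling threshold k(i)
        comp = [s for s in corpus if is_k_complete(s, k)]
        srl = train_srl(comp)                       # train on the k-complete subset
        added = 0
        for sent in corpus:
            for tok, pred in zip(sent, predict_labels(srl, sent)):
                if tok["label"] is None and pred is not None:
                    tok["label"] = pred             # supplement, never overwrite
                    added += 1
        if added == 0:                              # nothing left to supplement
            break
    return srl

As in Algorithm 1, predicted labels only supplement the projected ones; the loop stops either when no new labels can be added or at the experimentally chosen iteration cutoff.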
4 Multilingual Experiments We use our method to generate Proposition Banks for 7 languages and evaluate the generated resources. We seek to answer the following questions: (1) What is the estimated quality for the generated PropBanks? How well does the approach work without language-specific adaptation? (2) Are there notable differences in quality from language to language; if so, why? We also present initial investigations on how different factors affect the performance of our method. 4.1 Experimental Setup Data Tab. 5 lists the 7 different TLs and resources used in our experiments.8 We chose these TLs because (1) they are among top 10 most influential languages in the world (Weber, 1997); and (2) we could find language experts to evaluate the results. English is used as SL in all our experiments. Approach Tested For each TL, we used FBbest (Sec. 3.3) to generate a corpus with semantic labels. From each TL corpus, we extracted all complete sentences to form the generated PropBanks. 8From each parallel corpus, we only keep sentences that are considered well-formed based on a set of standard heuristics. For example, we require a well-formed sentence to end in punctuation and not to contain certain special characters. For Arabic, as the dependency parser we use has relatively poor parsing accuracy, we additionally require sentences to be shorter than 100 characters. PREDICATE ARGUMENT LANG. Match P R F1 P R F1 Agr κ Arabic part. 0.97 0.89 0.93 0.86 0.69 0.77 0.92 0.87 exact 0.97 0.89 0.93 0.67 0.63 0.65 0.85 0.77 Chinese part. 0.97 0.88 0.92 0.93 0.83 0.88 0.95 0.91 exact 0.97 0.88 0.92 0.83 0.81 0.82 0.92 0.86 French part. 0.95 0.92 0.94 0.92 0.76 0.83 0.97 0.95 exact 0.95 0.92 0.94 0.86 0.74 0.8 0.95 0.91 German part. 0.96 0.92 0.94 0.95 0.73 0.83 0.95 0.91 exact 0.96 0.92 0.94 0.91 0.73 0.81 0.92 0.86 Hindi part. 0.91 0.68 0.78 0.93 0.66 0.77 0.94 0.88 exact 0.91 0.68 0.78 0.58 0.54 0.56 0.81 0.69 Russian part. 0.96 0.94 0.95 0.91 0.68 0.78 0.97 0.94 exact 0.96 0.94 0.95 0.79 0.65 0.72 0.93 0.89 Spanish part. 0.96 0.93 0.95 0.85 0.74 0.79 0.91 0.85 exact 0.96 0.93 0.95 0.75 0.72 0.74 0.85 0.77 Table 6: Estimated precision and recall over seven languages. Manual Evaluation While a gold annotated corpus for French (Frenchgold) was available for our experiments in the previous Sections, no such resources existed for the other TLs we wished to evaluate. We therefore chose to conduct a manual evaluation for each TL, each executed identically: For each TL we randomly selected 100 complete sentences with their generated semantic labels and assigned them to two language experts who were instructed to evaluate the semantic labels (based on their English descriptions) for the predicates and their core arguments. For each label, they were asked to determine (1) whether the label is correct; (2) if yes, then whether the boundary of the labeled constituent is correct: If also yes, mark the label as fully correct, otherwise as partially correct. Metrics We used the standard measures of precision, recall, and F1 to measure the performance of the SRLs, with the following two schemes: (1) Exact: Only fully correct labels are considered as true positives; (2) Partial: Both fully and partially correct matches are considered as true positives.9 4.2 Experimental Results Tab. 6 summarizes the estimated quality of semantic labels generated by our method for all seven TL. 
As can be seen, our method performed well for all 9Note that since the manually evaluated semantic labels are only a small fraction of the labels generated, the performance numbers obtained from manual evaluation is only an estimate of the actual quality for the generated resources.Thus the numbers obtained based on manual evaluation cannot be directly compared against the numbers computed over Frenchgold. 403 PROPBANK #COMPLETE %COMPLETE #VERBS Arabic 68.512 14% 330 Chinese 419,140 14% 1,102 French 248.256 10% 1145 German 44.007 8% 537 Hindi 1.623 3% 59 Russian 496.033 19% 1.349 Spanish 165.582 7% 909 Table 7: Characteristics of the generated PropBanks. seven languages and generated high quality semantics labels across the board. For predicate labels, the precision is over 95% and the recall is over 85% for all languages except for Hindi. For argument labels, when considering partially correct matches, the precision is at least 85% (above 90% for most languages) and the recall is between 66% to 83% for all the languages. These encouraging results obtained from a diverse set of languages implies the generalizability of our method. In addition, the inter-annotator agreement is very high for all the languages, indicating that the results obtained based on manual evaluation are very reliable. In addition, we make a number of interesting observations: Dependency Parsing Accuracy The precision for exact argument labels is significantly below partial matches, particularly for Hindi (↓35 pp) and Arabic (↓19 pp). Since argument boundaries are determined syntactically, such errors are caused by dependency parsing. The fact that Hindi and Arbic suffer the most from this issue is consistent with the poorer performance of their dependency parsers compared to other languages (Nivre et al., 2006; Green and Manning, 2010). Hindi as the Main Outlier The results for Hindi are much worse than the results for other languages. Besides the poorer dependency parser performance, the size of the parallel corpus used could be a factor: Hindencorp is one to two orders of magnitude smaller than the other corpora. The quality of the parallel corpus could be a reason as well: Hindencorp was collected from various sources, while both UN and Europarl were extracted from governmental proceedings. Language-specific Errors Certain errors occur more frequently in some languages than others. An example are deverbal nouns in Chinese (Xue, 2006) in formal passive constructions with support verb 受. Since we currently only consider verbs for predPREDICATE ARGUMENT SAMPLE SIZE P R F1 P R F1 100% 0.87 0.81 0.84 0.86 0.74 0.8 10% 0.88 0.8 0.84 0.87 0.72 0.79 1% 0.9 0.76 0.83 0.89 0.67 0.76 Table 8: Estimated impact of downsampling parallel corpus. PREDICATE ARGUMENT HEURISTIC P R F1 P R F1 none∗ 0.87 0.81 0.84 0.86 0.74 0.8 none∗∗ 0.88 0.8 0.84 0.76 0.65 0.7 customization∗0.87 0.81 0.84 0.9 0.74 0.81 Table 9: Impact of English SRLs (∗=CLEARNLP, ∗∗=MATESRL) and language-spec. customization (filter synt. expletive). icate labels, predicate labels are projected onto the support verbs instead of the deverbal nouns. Such errors appear for light verb constructions in all languages, but particularly affect Chinese due to the high frequency of this passive construction in the UN corpus. Low Fraction of Complete Sentences As Tab. 7 shows, the fraction of complete sentences in the generated PropBanks is rather low, indicating the impact of moderate recall on the size of generated PropBanks. 
Especially for languages for which only small parallel corpora are available, such as Hindi, this points to the need to address recall issues in future work. 4.3 Additional Experiments The observations made in Sec. 4.2 suggests a few factors that may potentially affect the performance of our method. To better understand their impact, we conducted the following initial investigation. SRL models produced in this set of experiments were evaluated using Frenchgold, sampled and evaluated in the same way as other experiments in this section for comparability. Data Size We varied the data size for French by downsampling the UN corpus. As one can see from Tab. 8, downsampling the dataset by one order of magnitude (to 250k sentences) only slightly affects precision, while downsampling to 25k sentences has a more pronounced but still small impact on recall. It appears that data size does not have significant impact on the performance of our method. Language-specific Customizations While our method is language-agnostic, intuitively languagespecific customization can be helpful in address404 ing language-specific errors. As an initial experiment, we added a simple heuristic to filter out French verbs that are commonly used for “existential there” constructions, as one type of common errors for French involves the syntactic expletive il (Danlos, 2005) in “existential there” constructions such as il faut (see Fig. 1 (TL sentence) for an example) wrongly labeled with with role information. As shown in Tab. 9, this simple customization results in a small increase in precision, suggesting that language-specific customization can be helpful. Quality of English SRL As noted in Sec. 2.5, errors made by English SRL are often prorogated to the TL via projection. To assess the impact of English SRL quality, we used two different English SRL systems: CLEARNLP and MATE-SRL. As can be seen from Tab. 9, the impact of English SRL quality is substantial on argument labeling. 4.4 Multilingual PropBanks To facilitate future research on multilingual SRL, we release the created PropBanks for all 7 languages to the research community to encourage further research. Tab. 7 gives an overview over the resources. 5 Related Work Annotation Projection in Parallel Corpora to train monolingual tools for new languages was introduced in the context of learning a PoS tagger (Yarowsky et al., 2001). Similar in spirit to our approach of using filters to increase the precision of projected labels, recent work (T¨ackstr¨om et al., 2013) uses token and type constraints to guide learning in cross-lingual PoS tagging. Projection of Semantic Labels was considered for FrameNet (Baker et al., 1998) in (Pad´o and Lapata, 2009; Basili et al., 2009). Recently, however, most work in the area focuses on PropBank, which has been identified as a more suitable annotation scheme for joint syntactic-semantics settings due to broader coverage (Merlo and van der Plas, 2009), and was shown to be usable for languages other than English (Monachesi et al., 2007). Direct projection of PropBank annotations was considered in (Van der Plas et al., 2011). Our approach significantly outperforms theirs in terms of recall and F1 for both predicates and arguments (Section 3). A approach was proposed in (Van der Plas et al., 2014) in which information is aggregated at the corpus level, resulting in a significantly better SRL corpus for French. 
However, this approach has several practical limitations: (1) it does not consider the problem of argument identification of SRL systems, treating arguments as already given; (2) it generates rules for the argument classification step preferably from manually annotated data; (3) it has been demonstrated for a single language (French), and was not applied to any other language. In contrast, our approach trains an SRL system for both predicate and argument labels, in a completely automatic fashion. Furthermore, we have applied our approach to generate PropBanks for 7 languages and conducted experiments that indicate a high F1 measure for all languages (Section 4). Other Related Work A number of approaches such as model transfer (Kozhevnikov and Titov, 2013) and role induction (Titov and Klementiev, 2012) exist for the argument classification step in the SRL pipeline. In contrast, our work addresses the full SRL pipeline and seeks to generate SRL resources for TLs with English PropBank labels. 6 Conclusion We proposed a two-staged method to construct multilingual SRL resources using monolingual SRL and parallel data and showed that our method outperforms previous approaches in both precision and recall. More importantly, through comprehensive experiments over seven languages from three language families, we show that our proposed method works well across different languages without any language specific customization. Preliminary results from additional experiments indicate that better English SRL and language-specific customization can further improve the results, which we aim to investigate in future work. A qualitative comparison against existing or under-construction PropBanks for Chinese (Xue, 2008), Hindi (Vaidya et al., 2011) or Arabic (Zaghouani et al., 2010) may be interesting, both for comparison of resources and for defining language-specific customizations. In addition, we plan to expand our experiments both to more languages as well as NomBank (Meyers et al., 2004)-style noun labels. 405 References [Baker et al.1998] Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The berkeley framenet project. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational LinguisticsVolume 1, pages 86–90. Association for Computational Linguistics. [Basili et al.2009] Roberto Basili, Diego De Cao, Danilo Croce, Bonaventura Coppola, and Alessandro Moschitti. 2009. Cross-language frame semantics transfer in bilingual corpora. In Computational Linguistics and Intelligent Text Processing, pages 332–345. Springer. [Bj¨orkelund et al.2009] Anders Bj¨orkelund, Love Hafdell, and Pierre Nugues. 2009. Multilingual semantic role labeling. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 43–48. Association for Computational Linguistics. [Bohnet and Nivre2012] Bernd Bohnet and Joakim Nivre. 2012. A transition-based system for joint part-of-speech tagging and labeled non-projective dependency parsing. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1455–1465. Association for Computational Linguistics. [Bohnet2010] Bernd Bohnet. 2010. Very high accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 89–97. Association for Computational Linguistics. 
[Bojar et al.2014] Ondˇrej Bojar, Vojtˇech Diatka, Pavel Rychl`y, Pavel Straˇn´ak, V´ıt Suchomel, Aleˇs Tamchyna, Daniel Zeman, et al. 2014. Hindencorp–hindi-english and hindi-only corpus for machine translation. In Proceedings of the Ninth International Conference on Language Resources and Evaluation. [Choi and McCallum2013] Jinho D. Choi and Andrew McCallum. 2013. Transition-based dependency parsing with selectional branching. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics. [Danlos2005] Laurence Danlos. 2005. Automatic recognition of french expletive pronoun occurrences. In Natural language processing. Proceedings of the 2nd International Joint Conference on Natural Language Processing (IJCNLP-05), pages 73–78. Citeseer. [DeNero and Liang2007] John DeNero and Percy Liang. 2007. The Berkeley Aligner. http://code. google.com/p/berkeleyaligner/. [Erk et al.2003] K. Erk, A. Kowalski, S. Pado, and S. Pinkal. 2003. Towards a resource for lexical semantics: A large german corpus with extensive semantic annotation. In ACL. [Green and Manning2010] Spence Green and Christopher D Manning. 2010. Better arabic parsing: Baselines, evaluations, and analysis. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 394–402. Association for Computational Linguistics. [Hajiˇc et al.2009] Jan Hajiˇc, Massimiliano Ciaramita, Richard Johansson, Daisuke Kawahara, Maria Ant`onia Mart´ı, Llu´ıs M`arquez, Adam Meyers, Joakim Nivre, Sebastian Pad´o, Jan ˇStˇep´anek, et al. 2009. The conll-2009 shared task: Syntactic and semantic dependencies in multiple languages. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–18. Association for Computational Linguistics. [Koehn2005] Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In MT summit, volume 5, pages 79–86. [Kozhevnikov and Titov2013] Mikhail Kozhevnikov and Ivan Titov. 2013. Cross-lingual transfer of semantic role labeling models. In ACL (1), pages 1190–1200. [Maqsud et al.2014] Umar Maqsud, Sebastian Arnold, Michael H¨ulfenhaus, and Alan Akbik. 2014. Nerdle: Topic-specific question answering using wikia seeds. In Lamia Tounsi and Rafal Rak, editors, COLING 2014, 25th International Conference on Computational Linguistics, Proceedings of the Conference System Demonstrations, August 23-29, 2014, Dublin, Ireland, pages 81–85. ACL. [Merlo and van der Plas2009] Paola Merlo and Lonneke van der Plas. 2009. Abstraction and generalisation in semantic role labels: Propbank, verbnet or both? In ACL 2009, pages 288–296. [Meyers et al.2004] Adam Meyers, Ruth Reeves, Catherine Macleod, Rachel Szekely, Veronika Zielinska, Brian Young, and Ralph Grishman. 2004. Annotating noun argument structure for nombank. In LREC, volume 4, pages 803–806. [Monachesi et al.2007] Paola Monachesi, Gerwert Stevens, and Jantine Trapman. 2007. Adding semantic role annotation to a corpus of written dutch. In Proceedings of the Linguistic Annotation Workshop, LAW ’07, pages 77–84. [Nivre et al.2006] Joakim Nivre, Johan Hall, and Jens Nilsson. 2006. Maltparser: A data-driven parsergenerator for dependency parsing. In Proceedings of LREC, volume 6, pages 2216–2219. 406 [Pad´o and Lapata2009] Sebastian Pad´o and Mirella Lapata. 2009. Cross-lingual annotation projection for semantic roles. Journal of Artificial Intelligence Research, 36(1):307–340. [Pado2007] Sebastian Pado. 2007. 
Cross-Lingual Annotation Projection Models for Role-Semantic Information. Ph.D. thesis, Saarland University. MP. [Palmer et al.2005] Martha Palmer, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational linguistics, 31(1):71–106. [Rafalovitch et al.2009] Alexandre Rafalovitch, Robert Dale, et al. 2009. United nations general assembly resolutions: A six-language parallel corpus. In Proceedings of the MT Summit, volume 12, pages 292– 299. [Schuler2005] Karin Kipper Schuler. 2005. Verbnet: A Broad-coverage, Comprehensive Verb Lexicon. Ph.D. thesis, University of Pennsylvania. [Shen and Lapata2007] Dan Shen and Mirella Lapata. 2007. Using semantic roles to improve question answering. In EMNLP-CoNLL, pages 12–21. Citeseer. [T¨ackstr¨om et al.2013] Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Transactions of the Association for Computational Linguistics, 1:1–12. [Titov and Klementiev2012] Ivan Titov and Alexandre Klementiev. 2012. Crosslingual induction of semantic roles. In ACL, pages 647–656. [Vaidya et al.2011] Ashwini Vaidya, Jinho D Choi, Martha Palmer, and Bhuvana Narasimhan. 2011. Analysis of the hindi proposition bank using dependency structure. In Proceedings of the 5th Linguistic Annotation Workshop, pages 21–29. Association for Computational Linguistics. [Van der Plas et al.2010] Lonneke Van der Plas, Tanja Samardˇzi´c, and Paola Merlo. 2010. Cross-lingual validity of propbank in the manual annotation of french. In Proceedings of the Fourth Linguistic Annotation Workshop, pages 113–117. Association for Computational Linguistics. [Van der Plas et al.2011] Lonneke Van der Plas, Paola Merlo, and James Henderson. 2011. Scaling up automatic cross-lingual semantic role annotation. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 299–304. Association for Computational Linguistics. [Van der Plas et al.2014] Lonneke Van der Plas, Marianna Apidianaki, Rue John von Neumann, and Chenhua Chen. 2014. Global methods for cross-lingual semantic role and predicate labelling. In Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014), pages 1279–1290. Association for Computational Linguistics. [Weber1997] George Weber. 1997. Top languages: The world’s 10 most influential languages. Language Today, December. [Xue2006] Nianwen Xue. 2006. Semantic role labeling of nominalized predicates in chinese. In Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, pages 431– 438. Association for Computational Linguistics. [Xue2008] Nianwen Xue. 2008. Labeling chinese predicates with semantic roles. Computational linguistics, 34(2):225–255. [Yarowsky et al.2001] David Yarowsky, Grace Ngai, and Richard Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the first international conference on Human language technology research, pages 1–8. Association for Computational Linguistics. [Zaghouani et al.2010] Wajdi Zaghouani, Mona Diab, Aous Mansouri, Sameer Pradhan, and Martha Palmer. 2010. The revised arabic propbank. In Proceedings of the Fourth Linguistic Annotation Workshop, pages 222–226. Association for Computational Linguistics. 407
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 31–41, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Statistical Machine Translation Features with Multitask Tensor Networks Hendra Setiawan, Zhongqiang Huang, Jacob Devlin†∗, Thomas Lamar, Rabih Zbib, Richard Schwartz and John Makhoul Raytheon BBN Technologies, 10 Moulton St, Cambridge, MA 02138, USA †Microsoft Research, One Microsoft Way, Redmond, WA 98052, USA {hsetiawa,zhuang,tlamar,rzbib,schwartz,makhoul}@bbn.com [email protected] Abstract We present a three-pronged approach to improving Statistical Machine Translation (SMT), building on recent success in the application of neural networks to SMT. First, we propose new features based on neural networks to model various nonlocal translation phenomena. Second, we augment the architecture of the neural network with tensor layers that capture important higher-order interaction among the network units. Third, we apply multitask learning to estimate the neural network parameters jointly. Each of our proposed methods results in significant improvements that are complementary. The overall improvement is +2.7 and +1.8 BLEU points for Arabic-English and ChineseEnglish translation over a state-of-the-art system that already includes neural network features. 1 Introduction Recent advances in applying Neural Networks to Statistical Machine Translation (SMT) have generally taken one of two approaches. They either develop neural network-based features that are used to score hypotheses generated from traditional translation grammars (Sundermeyer et al., 2014; Devlin et al., 2014; Auli et al., 2013; Le et al., 2012; Schwenk, 2012), or they implement the whole translation process as a single neural network (Bahdanau et al., 2014; Sutskever et al., 2014). The latter approach, sometimes referred to as Neural Machine Translation, attempts to overhaul SMT, while the former capitalizes on the strength of the current SMT paradigm and leverages the modeling power of neural networks to improve the scoring of hypotheses generated ∗* Research conducted when the author was at BBN. by phrase-based or hierarchical translation rules. This paper adopts the former approach, as n-best scores from state-of-the-art SMT systems often suggest that these systems can still be significantly improved with better features. We build on (Devlin et al., 2014) who proposed a simple yet powerful feedforward neural network model that estimates the translation probability conditioned on the target history and a large window of source word context. We take advantage of neural networks’ ability to handle sparsity, and to infer useful abstract representations automatically. At the same time, we address the challenge of learning the large set of neural network parameters. In particular, • We develop new Neural Network Features to model non-local translation phenomena related to word reordering. Large fullylexicalized contexts are used to model these phenomena effectively, making the use of neural networks essential. All of the features are useful individually, and their combination results in significant improvements (Section 2). • We use a Tensor Neural Network Architecture (Yu et al., 2012) to automatically learn complex pairwise interactions between the network nodes. 
The introduction of the tensor hidden layer results in more powerful features with lower model perplexity and significantly improved MT performance for all of neural network features (Section 3). • We apply Multitask Learning (MTL) (Caruana, 1997) to jointly train related neural network features by sharing parameters. This allows parameters learned for one feature to benefit the learning of the other features. This results in better trained models and achieves additional MT improvements (Section 4). We apply the resulting Multitask Tensor Networks to the new features and to existing ones, 31 obtaining strong experimental results over the strongest previous results of (Devlin et al., 2014). We obtain improvements of +2.5 BLEU points for Arabic-English and +1.8 BLEU points for Chinese-English on the DARPA BOLT Web Forum condition. We also obtain improvements of +2.7 BLEU point for Arabic-English and +1.9 BLEU points for Chinese-English on the NIST Open12 test sets over the best previously published results in (Devlin et al., 2014). Both the tensor architecture and multitask learning are general techniques that are likely to benefit other neural network features. 2 New Non-Local SMT Features Existing SMT features typically focus on local information in the source sentence, in the target hypothesis, or both. For example, the n-gram language model (LM) predicts the next target word by using previously generated target words as context (local on target), while the lexical translation model (LTM) predicts the translation of a source word by taking into account surrounding source words as context (local on source). In this work, we focus on non-local translation phenomena that result from non-monotone reordering, where local context becomes non-local on the other side. We propose a new set of powerful MT features that are motivated by this simple idea. To facilitate the discussion, we categorize the features into hypothesis-enumerating features that estimates a probability for each generated target word (e.g., n-gram language model), and sourceenumerating features that estimates a probability for each source word (e.g., lexical translation). More concretely, we introduce a) Joint Model with Offset Source Context (JMO), a hypothesis enumerating feature that predicts the next target word the source context affiliated to the previous target words; and b) Translation Context Model (TCM), a source-enumerating feature that predicts the context of the translation of a source word rather than the translation itself. These two models extend pre-existing features: the Joint (language and translation) Model (JM) of (Devlin et al., 2014) and the LTM respectively respectively. We use a large lexicalized context for there features, making the choice of implementing them as neural networks essential. We also present neuralnetwork implementations of pre-existing sourceenumerating features: lexical translation, orientation and fertility models. We obtain additional gains from using tensor networks and multitask learning in the modeling and training of all the features. 2.1 Hypothesis-Enumerating Features As mentioned, hypothesis-enumerating features score each word in the hypothesis, typically by conditioning it on a context of n-1 previous target words as in the n-gram language model. One recent such model, the joint model of Devlin et al. (2014) achieves large improvements to the stateof-the-art SMT by using a large context window of 11 source words and 3 target words. 
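As a rough illustration of how such a context is assembled, the sketch below gathers the target history and the affiliated source window for one prediction; the affiliation array and the padding symbol are simplifying assumptions, not the exact preprocessing of Devlin et al. (2014). Setting the offset k to 0 gives the plain joint-model context, while the offset variant introduced next simply shifts the affiliation to a previous target word.

def affiliated_context(source, target, affiliation, i, n=4, m=5, k=0):
    # Predicting target word e_i: use the n-1 previous target words plus a
    # window of m source words on either side of the source word affiliated
    # with e_{i-k} (k = 0 reproduces the joint-model context).
    history = target[max(0, i - n + 1):i]
    a = affiliation[i - k]                  # index of the affiliated source word
    window = [source[j] if 0 <= j < len(source) else "<pad>"
              for j in range(a - m, a + m + 1)]
    return history, window

With m = 5 this yields the 11-word source window mentioned above.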
The Joint Model with Offset Source Context (JMO) is an extension of the JM that uses the source words affiliated with the n-gram target history as context. The source contexts of JM and JMO overlap highly when the translation is monotone, but are complementary when the translation requires word reordering. 2.1.1 Joint Model with Offset Source Context Formally, JMO estimates the probability of the target hypothesis E conditioned on the source sentence F and a target-to-source affiliation A: P(E|F, A) ≈ |E| Y i=1 P(ei|ei−n+1 i−1 , Cai−k = fai−k+m ai−k−m ) where ei is the word being predicted; ei−n+1 i−1 is the string of n −1 previously generated words; Cai−k to the source context of m source words around fai−k, the source word affiliated with ei−k. We refer to k as the offset parameter. We use the definition of word affiliation introduced in Devlin et al. (2014). When no source context is used, the model is equivalent to an n-gram language model, while an offset parameter of k = 0 reduces the model to the JM of Devlin et al. (2014). When k > 0, the JMO captures non-local context in the prediction of the next target word. More specifically, ei−k and ei, which are local on the target side, are affiliated to fai−k and fai which may be distant from each other on the source side due to non-monotone translation, even for k = 1. The offset model captures reordering constraints by encouraging the predicted target word ei to fit well with the previous affiliated source word fai−k and its surrounding words. We implement a separate feature for each value of k, and later train 32 them jointly via multitask learning. As our experiments in Section 5.2.1 confirm, the historyaffiliated source context results in stronger SMT improvement than just increasing the number of surrounding words in JM. Fig. 1 illustrates the difference between JMO and JM. Assuming n = 3 and m = 1, then JM estimates P(e5|e4, e3, Ca5 = {f6, f7, f8}). On the other hand, for k = 1 , JMOk=1 estimates P(e5|e4, e3, Ca4 = {f8, f9, f10}). f9 f5 . . . e5 e6 e4 e7 e3 . . . . . . C7 = Ca5 . . . z }| { f6 f7 f8 Figure 1: Example to illustrate features. f9 5 is the source segment, e7 3 is the corresponding translation and lines refer to the alignment. We show hypothesis-enumerating features that look at f7 and source-enumerating features that look at e5. We surround the source words affiliated with e5 and its n-gram history with a bracket, and surround the source words affiliated with the history of e5 with squares. 2.2 Source-Enumerating Features Source-Enumerating Features iterate over words in the source sentence, including unaligned words, and assign it a score depending on what aspect of translation they are modeling. A sourceenumerating feature can be formulated as follows: P(E|F, A) ≈ |F| Y j=1 P(Yj|Cj = fj+m j−m ) where Caj is the source context (similar to the hypothesis-enumerating features above) and Yj is the label being predicted by the feature. We first describe pre-existing source-enumerating features: the lexical translation model, the orientation model and the fertility model, and then discuss a new feature: Translation Context Model (TCM), which is an extension of the lexical translation model. 2.2.1 Pre-existing Features Lexical Translation model (LTM) estimates the probability of translating a source word fj to a target word l(fj) = ebj given a source context Cj, bj ∈B is the source-to-target word affiliation as defined in (Devlin et al., 2014). 
When f_j is translated to more than one word, we arbitrarily keep the left-most one. The target word vocabulary V is extended with a NULL token to accommodate unaligned source words.

Orientation model (ORI) describes the probability of the orientation of the translations of the phrases surrounding a source word f_j relative to its own translation. We follow (Setiawan et al., 2013) in modeling the orientation of the left and right phrases of f_j with maximal orientation span (the longest neighboring phrase consistent with the alignment), which we denote by L_j and R_j respectively. Thus, o(f_j) = \langle o_{L_j}(f_j), o_{R_j}(f_j) \rangle, where o_{L_j} and o_{R_j} refer to the orientations of L_j and R_j respectively. For unaligned f_j, we set o(f_j) = o_{L_j}(R_j), the orientation of R_j with respect to L_j.

Fertility model (FM) models the probability that a source word f_j generates \phi(f_j) words in the hypothesis. Our implemented model only distinguishes between aligned and unaligned source words (i.e., \phi(f_j) \in \{0, 1\}). The generalization of the model to account for multiple values of \phi(f_j) is straightforward.

2.2.2 Translation Context Model

As with JMO in Section 2.1.1, we aim to capture translation phenomena that appear local on the target hypothesis but non-local on the source side. Here, we do so by extending the LTM feature to predict not only the translated word e_{b_j}, but also its surrounding context. Formally, we model P(l(f_j) | C_j), where l(f_j) = e_{b_j - d}, \ldots, e_{b_j}, \ldots, e_{b_j + d} is the hypothesis word window around e_{b_j}. In practice, we decompose TCM further into \prod_{d'=-d}^{+d} P(e_{b_j + d'} | C_j) and implement each factor as a separate neural network-based feature. Note that TCM is equivalent to the LTM when d = 0. Because of word reordering, a given hypothesis word in l(f_j) might not be affiliated with f_j or even with the words in C_j. TCM can model non-local information in this way.

2.2.3 Combined Model

Since the feature label is undefined for unaligned source words, we make the model hierarchical, based on whether the source word is aligned or not, and thus arrive at the following formulation:

P(l(f_j)) \cdot P(ori(f_j)) \cdot P(\phi(f_j)) =
\begin{cases}
P(\phi(f_j) = 0) \cdot P(o_{L_j}(R_j)) & \text{if } f_j \text{ is unaligned} \\
P(\phi(f_j) \geq 1) \cdot \prod_{d'=-d}^{+d} P(e_{b_j + d'}) \cdot P(o_{L_j}(f_j), o_{R_j}(f_j)) & \text{if } f_j \text{ is aligned}
\end{cases}

We dropped the common context (C_j) for readability. We reuse Fig. 1 to illustrate the source-enumerating features. Assuming d = 1, the scores associated with f_7 are P(\phi(f_7) \geq 1 | C_7) for the FM; P(e_4 | C_7) \cdot P(e_5 | C_7) \cdot P(e_6 | C_7) for the TCM; and P(o(f_7) = \langle o_{L_7}(f_7) = RA, o_{R_7}(f_7) = RA \rangle) for the ORI (RA refers to Reverse Adjacent). L_7 and R_7 (i.e., f_6 and f_8^9 respectively), the longest neighboring phrases of f_7, are translated in reverse order and adjacent to e_5.

3 Tensor Neural Networks

The second part of this work improves SMT by improving the neural network architecture. Neural networks derive their strength from their ability to learn a high-level representation of the input automatically from data. This high-level representation is typically constructed layer by layer through a weighted-sum linear operation and a non-linear activation function. With sufficient training data, neural networks often achieve state-of-the-art performance on many tasks. This stands in sharp contrast to other algorithms that require tedious manual feature engineering. For the features presented in this paper, the context words are fed to the network with minimal engineering. We further strengthen the network's ability to learn rich interactions between its units by introducing tensors in the hidden layers.
The multiplicative property of the tensor bears a close resemblance to collocation of context words, which is useful in many natural language processing tasks. In conventional feedforward neural networks, the output of hidden layer l is produced by multiplying the output vector from the previous layer with a weight matrix (W_l) and then applying the activation function \sigma to the product. Tensor neural networks generalize this formulation by using a tensor U_l of order 3 for the weights. The output of node k in layer l is computed as follows:

h_l[k] = \sigma( h_{l-1} \cdot U_l[k] \cdot h_{l-1}^T )

where U_l[k], the k-th slice of U_l, is a square matrix. In our implementation, we follow (Yu et al., 2012; Hutchinson et al., 2013) and use a low-rank approximation U_l[k] = Q_l[k] \cdot R_l[k]^T, where Q_l[k], R_l[k] \in \mathbb{R}^{n \times r}. The output of node k becomes:

h_l[k] = \sigma( h_{l-1} \cdot Q_l[k] \cdot R_l[k]^T \cdot h_{l-1}^T )

In our experiments, we choose r = 1, and also apply the non-linear activation function \sigma distributively. We arrive at the following three equations for computing the hidden layer outputs (0 < l < L):

v_l = \sigma( h_{l-1} \cdot Q_l )
v'_l = \sigma( h_{l-1} \cdot R_l )
h_l = v_l \otimes v'_l

where h_{l-1} is double-projected to v_l and v'_l, and the two projections are merged using the Hadamard element-wise product operator \otimes. This formulation allows us to use the same infrastructure as conventional neural networks by projecting the previous layer to two different spaces of the same dimensions and then multiplying them element-wise. The only component that differs from conventional feedforward neural networks is the multiplicative function, which is trivially differentiable with respect to the learnable parameters. Figure 3(b) illustrates the tensor architecture for two hidden layers. The tensor network can learn collocation features more easily. For example, it can learn a collocation feature that is activated only if h_{l-1}[i] collocates with h_{l-1}[j] by setting U_l[k][i][j] to some positive number. This results in SMT improvements, as we describe in Section 5.

4 Multitask Learning

The third part of this paper addresses the challenge of effectively learning a large number of neural network parameters without overfitting. The challenge is even larger for tensor networks, since they practically double the number of parameters. In this section, we propose to apply Multitask Learning (MTL) to partially address this issue. We implement MTL as parameter sharing among the networks. This effectively reduces the number of parameters, and more importantly, it takes advantage of parameters learned for one feature to better
The application of MTL to machine translation, however, has been much less restricted, which is rather surprising since SMT features arise from the same translation task and are naturally related. We apply MTL for the features described in Section 2. We design all the features to also share the same neural network architecture (in this case, the tensor architecture described in Section 3) and the same input, thus resulting in two large neural networks: one for the hypothesis-enumerating features and another for the source-enumerating ones. This simplifies the implementation of MTL. Using this setup, it is possible to vary the number of shared hidden layers t from 0 (only sharing the embedding layer) to L −1 (sharing all the layers except the output). Note that in principle MTL is applicable to other set of networks that have different architecture or even different input set. With MTL, the training procedure is the same as that of standard neural networks. We use the back propagation algorithm, and use as the loss function the product of likelihood of each feature1: 1In this and in the other parts of the paper, we add the normalization regularization term described in (Devlin et al., 2014) to the loss function to avoid computing the normalization constant at model query/decoding time. Loss = X i M X j log (P (Yj(Xi))) where Xi is the training sample and Yj is one of the M models trained. We use the sum of log likelihoods since we assume that the features are independent. Fig. 3(c) illustrates MTL between M models sharing the input embedding layer and the first hidden layer (t = 1) compared to the separatelytrained conventional feedforward neural network and tensor neural network. 5 Experiments We demonstrate the impact of our work with extensive MT experiments on Arabic-English and Chinese-English translation for the DARPA BOLT Web Forum and the NIST OpenMT12 conditions. 5.1 Baseline MT System We run our experiments using a state-of-the-art string-to-dependency hierarchical decoder (Shen et al., 2010). The baseline we use includes a set of powerful features as follow: • Forward and backward rule probabilities • Contextual lexical smoothing (Devlin, 2009) • 5-gram Kneser-Ney LM • Dependency LM (Shen et al., 2010) • Length distribution (Shen et al., 2010) • Trait features (Devlin and Matsoukas, 2012) • Factored source syntax (Huang et al., 2013) • Discriminative sparse feature, totaling 50k features (Chiang et al., 2009) • Neural Network Joint Model (NNJM) and Neural Network Lexical Translation Model 35 (NNLTM) (Devlin et al., 2014) As shown, our baseline system already includes neural network-based features. NNJM, NNLTM and use two hidden layers with 500 units and use embedding of size 200 for each input. We use the MADA-ARZ tokenizer (Habash et al., 2013) for Arabic word tokenization. For Chinese tokenization, we use a simple longest-matchfirst lexicon-based approach. We align the training data using GIZA++ (Och and Ney, 2003). For tuning the weights of MT features including the new features, we use iterative k-best optimization with an ExpectedBLEU objective function (Rosti et al., 2010), and decode the test sets after 5 tuning iteration. We report the lower-cased BLEU and TER scores. 5.2 BOLT Discussion Forum The bulk of our experiments is on the BOLT Web Discussion Forum domain, which uses data collected by the LDC. The parallel training data consists of all of the high-quality NIST training corpora, plus an additional 3 million words of translated forum data. 
The tuning and test sets consist of roughly 5000 segments each, with 2 independent references for Arabic and 3 for Chinese. 5.2.1 Effects of New Features We first look at the effects of the proposed features compared to the baseline system. Table 1 summarizes the primary results of the Arabic-English and Chinese-English experiments for the BOLT condition. We show the experimental results related to hypothesis-enumerating features (HypEn) in rows S2-S5, those related to source-enumerating features (SrcEn) in rows S6-S9, and the combination of the two in row S10. For all the features, we set the source context length to m = 5 (11-word window). For JM and JMO, we set the target context length to n = 4. For the offset parameter k of JMO, we use values 1 to 3. For TCM, we model one word around the translation (d = 1). Larger values of d did not result in further gains. The baseline is comparable to the best results of (Devlin et al., 2014). In rows S3 to S5, we incrementally add a model with different offset source context, from k = 1 to k = 3. For AR-EN, adding JMOs with different offset source context consistently yields positive effects in BLEU score, while in ZH-EN, it yields positive effects in TER score. Utilizing all offset source contexts “+JMOk≤3” (row S5) yields around 0.9 BLEU point improvement in AR-EN and around 0.3 BLEU in ZH-EN compared to the baseline. The JMO consistently provides better improvement compared to a larger JM context (row S2), validating our hypothesis that using offset source context captures important non-local context. Rows S6 to S9 present the improvements that result from implementing pre-existing sourceenumerating SMT features as neural networks, and highlight the contribution of our translation context model (TCM). This set of experiments is orthogonal to the HypEn experiments (rows S2S5). Each pre-existing model has a modest positive cumulative effect on both BLEU and TER. We see this result as further confirming the current trend of casting existing SMT features as neural network since our baseline already contains such features. The next row present the results of adding the translation context model, with one word surrounding the translation (d = 1). As shown, TCM yields a positive effect of around 0.5 BLEU and TER improvements in AR-EN and around 0.2 BLEU and TER improvements in ZHEN. Separately, the set of source-enumerating features and the set of target-enumerating features produce around 1.1 to 1.2 points BLEU gain in AR-EN and 0.3 to 0.5 points BLEU gain in ZHEN. The combination of the two sets produces a complementary gain in addition to the gains of the individual models as Row (S10) shows. The combined gain improves to 1.5 BLEU points in AREN and 0.7 BLEU points in ZH-EN. System AR-EN ZH-EN BL TER BL TER S1: Baseline 43.2 45.0 30.2 58.3 S2: S1+JMLC8 43.5 45.0 30.2 58.5 S3: S1+JMOk=1 43.9 44.7 30.8 57.8 S4: S3+JMOk=2 43.9 44.7 30.7 57.8 S5: S4+JMOk=3 44.4 44.5 30.5 57.5 S6: S1+LTM 43.5 44.7 30.3 58.0 S7: S6+ORI 43.7 44.6 30.4 57.8 S8: S7+FERT 43.8 44.7 30.5 57.8 S9: S8+TCM 44.3 44.2 30.7 57.5 S10: S9+JMOk≤3 44.7 44.1 30.9 57.3 Table 1: MT results of various model combination in BLEU and in TER. 36 5.2.2 Effects of Tensor Network and Multitask Learning We first analyze the impact of tensor architecture and MTL intrinsically by reporting the models’ average log-likelihood on the validation sets (a subset of the test set) in Table 2. 
As mentioned, we group the models to HypEn (JM and JMOk≤3) and SrcEn (LTM, ORI,FERT and TCM) as we perform MTL on these two groups. Likelihood of these two groups in the previous subsection are in column “NN” (for Neural Network), which serves as a baseline. The application of the tensor architecture improves their likelihood as shown in column “Tensor” for both languages and models. Feat. Independent MTL NN Tensor t = 0 t = 1 L = 2 L = 3 AR HypEn -8.85 -8.54 -8.35 SrcEn -8.47 -8.32 -8.10 -8.09 ZH HypEn -11.48 -11.06 -10.87 SrcEn -10.77 -10.66 -10.54 -10.49 Table 2: Sum of the average log-likelihood of the models in HypEn and SrcEn. t = 0 refers to MTL that shares only the embedding layer, while t = 1 shares the first hidden layer as well. L refers to the network’s depth. Higher value is better. The likelihoods of the MTL-related experiments are in columns with “MTL” header. We present two set of results. In the first set (column “MTL,t=0,L=2”), we run MTL for features from column “Tensor” by sharing the embedding layer only (t = 0). This allows us to isolate the impact of MTL in the presence of Tensors. Column “MTL,t=1,l=3” corresponds to the experiment that produces the best intrinsic result, where each model uses Tensors with three hidden layers (500x500x500, l = 3) and the models share the embedding and the first hidden layers (t = 1). MTL consistently gives further intrinsic gain compared to tensors. More sharing provides an extra gain for SrcEn as shown in the last column. Note that we only experiment with different l and t for SrcEn and not for HypEn because the models in HypEn have different input sets. In our experiments, further sharing and more hidden layers resulted in no further gain. In total, we see a consistent positive effect in intrinsic evaluation from the tensor networks and multitask learning. Moving on to MT evaluation, we summarize the experiments showing the impact of Tensors and MTL in Table 3. For MTL, we use L = 3, t = 2 since it gives the best intrinsic score. Employing tensors instead of regular neural networks gives a significant and consistent positive impact for all models and language pairs. For the system with the baseline features, we use the tensor architecture for both the joint model and the lexical translation model of Devlin et al. resulting in an improvement of around 0.7 BLEU points, and showing the wide applicability of the tensor architecture. On top of this improved baseline, we also observe an improvement of the same scale for other models (column “Tensor”), except for HypEn features in AR-EN experiment. Moving to MTL experiments, we see improvements, especially from SrcEn features. MTL gives around 0.5 BLEU point improvement for AR-EN and around 0.4 BLEU point for ZH-EN. When we employ both HypEn and SrcEn together, MTL gives around 0.4 BLEU point in AR-EN and 0.2 BLEU point in ZH-EN. In total, our work results in an improvement of 2.5 BLEU point for AR-EN and 1.8 for BLEU point in ZH-EN on top of the best results in (Devlin et al., 2014). 5.3 NIST OpenMT12 Our NIST system is compatible with the OpenMT12 constrained track, which consists of 10M words of high-quality parallel training for Arabic, and 25M words for Chinese. The n-gram LM is trained on 5B words of data from the English GigaWord. For test, we use the “Arabic-ToEnglish Original Progress Test” (1378 segments) and “Chinese-to-English Original Progress Test + OpenMT12 Current Test” (2190 segments), which consists of a mix of newswire and web data. All test segments have 4 references. 
Our tuning set contains 5000 segments, and is a mix of the MT02-05 eval set as well as additional held-out parallel data from the training corpora. We report the experiments for the NIST condition in Table 4. In particular, we investigate the impact of deploying our new features (column “Feat”) and demonstrate the effects of the tensor architecture (column “Tensor”) and multitask learning (column “MTL”). As shown the results are inline with the BOLT condition where we observe additive improvements from adding our new features, applying tensor network and multitask learning. On Arabic-English, we see a gain of 2.7 37 Feature set AR-EN ZH-EN NN Tensor MTL NN Tensor MTL R1: Baseline Features 43.2 43.9 30.2 30.8 R2: R1 + HypEn 44.4 44.4 44.5 30.5 31.5 31.3 R3: R1 + SrcEn 44.3 44.9 45.5 30.7 31.5 31.9 R4: R1 + HypEn + SrcEn 44.7 45.3 45.7 30.9 31.8 32.0 Table 3: Experimental results to investigate the effects of the new features, DTN and MTL. The top part shows the BOLT results, while the bottom part shows the NIST results. The best results for each conditions and each language-pair are in bold. The baselines are in italics. . Base. Feat Tensor MTL AR-EN 53.7 55.4 55.9 56.4 mixed-case 51.8 53.1 53.7 54.1 ZH-EN 36.6 37.8 38.2 38.5 mixed-case 34.4 35.5 35.9 36.1 Table 4: Experimental results for the NIST condition. Mixed-case scores are also reported. Baselines are in italics. BLEU point and on Chinese-English, we see a 1.9 BLEU point gain. We also report the mixed-cased BLEU scores for comparison with previous best published results, i.e. Devlin et al. (2014) report 52.8 BLEU for Arabic-English and 34.7 BLEU for Chinese-English. Thus, our results are around 1.31.4 BLEU point better. Note that they use additional rescoring features but we do not. 6 Related Work Our work is most closely related to Devlin et al. (2014). They use a simple feedforward neural network to model two important MT features: A joint language and translation model, and a lexical translation model. They show very large improvements on Arabic-English and ChineseEnglish web forum and newswire baselines. We improve on their work in 3 aspects. First, we model more features using neural networks, including two novel ones: a joint model with offset source context and a translation context model. Second, we enhance the neural network architecture by using tensor layers, which allows us to model richer interactions. Lastly, we improve the performance of the individual features by training them using multitask learning. In the remainder of this section, we describe previous work relating to the three aspect of our work, namely MT modeling, neural network architecture and model learning. The features we propose in this paper address the major aspects of SMT modeling that have informed much of the research since the original IBM models (Brown et al., 1993): lexical translation, reordering, word fertility, and language models. Of particular relevance to our work are approaches that incorporate context-sensitivity into the models (Carpuat and Wu, 2007), formulate reordering as orientation prediction task (Tillman, 2004) and that use neural network language models (Bengio et al., 2003; Schwenk, 2010; Schwenk, 2012), and incorporate source-side context into them (Devlin et al., 2014; Auli et al., 2013; Le et al., 2012; Schwenk, 2012). Approaches to incorporating source context into a neural network model differ mainly in how they represent the source sentence and in how long is the history they keep. 
In terms of representation of the source sentence, we follow (Devlin et al., 2014) in using a window around the affiliated source word. To name some other approaches, Auli et al. (2013) uses latent semantic analysis and source sentence embeddings learned from the recurrent neural network; Sundermeyer et al. (2014) take the representation from a bidirectional LSTM recurrent neural network; and Kalchbrenner and Blunsom (2013) employ a convolutional sentence model. For target context, recent work has tried to look beyond the classical n-gram history. (Auli et al., 2013; Sundermeyer et al., 2014) consider an unbounded history, at the expense of making their model only applicable for N-best rescoring. Another recent line of research (Bahdanau et al., 2014; Sutskever et al., 2014) departs more radically from conventional feature-based SMT and implements the MT system as a single neural network. These models use a representation of the whole input sentence. We use a feedforward neural network in this work. Besides feedforward and recurrent net38 works, other network architectures that have been applied to SMT include convolutional networks (Kalchbrenner et al., 2014) and recursive networks (Socher et al., 2011). The simplicity of feedforward networks works to our advantage. More specifically, due to the absence of a feedback loop, the feedforward architecture allows us to treat individual decisions independently, which makes parallelization of the training easy and the querying the network at decoding time straightforward. The use of tensors in the hidden layers strengthens the neural network model, allowing us to model more complex feature interactions like collocation, which has been long recognized as important information for many NLP tasks (e.g. word sense disambiguation (Lee and Ng, 2002)). The tensor formulation we use is similar to that of (Yu et al., 2012; Hutchinson et al., 2013). Tensor Neural Networks have a wide application in other field, but have only been recently applied in NLP (Socher et al., 2013; Pei et al., 2014). To our knowledge, our work is the first to use tensor networks in SMT. Our approach to multitask learning is related to work that is often labeled joint training or transfer learning. To name a few of these works, Finkel and Manning (2009) successfully train name entity recognizers and syntactic parsers jointly, and Singh et al. (2013) train models for coreference resolution, named entity recognition and relation extraction jointly. Both efforts are motivated by the minimization of cascading errors. Our work is most closely related to Collobert and Weston (2008; Collobert et al. (2011), who apply multitask learning to train neural networks for multiple NLP models: part-of-speech tagging, semantic role labeling, named-entity recognition and language model variations. 7 Conclusion This paper argues that a relatively simple feedforward neural network can still provides significant improvement to Statistical Machine Translation (SMT). We support this argument by presenting a multi-pronged approach that addresses modeling, architectural and learning aspects of pre-existing neural network-based SMT features. More concretely, we paper present a new set of neural network-based SMT features to capture important translation phenomena, extend feedforward neural network with tensor layers, and apply multitask learning to integrate the SMT features more tightly. 
Empirically, all our proposals successfully produce an improvement over state-of-the-art machine translation system for Arabic-to-English and Chinese-to-English and for both BOLT web forum and NIST conditions. Building on the success of this paper, we plan to develop other neuralnetwork-based features, and to also relax the limiteation of current rule extraction heuristics by generating translations word-by-word. Acknowledgement This work was supported by DARPA/I2O Contract No. HR0011-12-C-0014 under the BOLT Program. The views, opinions, and/or findings contained in this article are those of the author and should not be interpreted as representing the official views or policies, either expressed or implied, of the Defense Advanced Research Projects Agency or the Department of Defense. References Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. 2013. Joint language and translation modeling with recurrent neural networks. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1044– 1054, Seattle, Washington, USA, October. Association for Computational Linguistics. Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2014. Neural machine translation by jointly learning to align and translate. Technical Report 1409.0473, arXiv. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Comput. Linguist., 19(2):263– 311, June. Marine Carpuat and Dekai Wu. 2007. Improving statistical machine translation using word sense disambiguation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), pages 61–72, Prague, Czech Republic, June. Association for Computational Linguistics. Rich Caruana. 1997. Multitask learning. Machine Learning, 28(1):41–75. 39 David Chiang, Kevin Knight, and Wei Wang. 2009. 11,001 new features for statistical machine translation. In HLT-NAACL, pages 218–226. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, pages 160–167, New York, NY, USA. ACM. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. J. Mach. Learn. Res., 12:2493–2537, November. Jacob Devlin and Spyros Matsoukas. 2012. Traitbased hypothesis selection for machine translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT ’12, pages 528–532, Stroudsburg, PA, USA. Association for Computational Linguistics. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1370–1380, Baltimore, Maryland, June. Association for Computational Linguistics. Jacob Devlin. 2009. Lexical features for statistical machine translation. Master’s thesis, University of Maryland. 
Jenny Rose Finkel and Christopher D. Manning. 2009. Joint parsing and named entity recognition. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 326–334, Boulder, Colorado, June. Association for Computational Linguistics. Nizar Habash, Ryan Roth, Owen Rambow, Ramy Eskander, and Nadi Tomeh. 2013. Morphological analysis and disambiguation for dialectal arabic. In HLT-NAACL, pages 426–432. Zhongqiang Huang, Jacob Devlin, and Rabih Zbib. 2013. Factored soft source syntactic constraints for hierarchical machine translation. In EMNLP, pages 556–566. Brian Hutchinson, Li Deng, and Dong Yu. 2013. Tensor deep stacking networks. IEEE Trans. Pattern Anal. Mach. Intell., 35(8):1944–1957, August. Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1700–1709, Seattle, Washington, USA, October. Association for Computational Linguistics. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 655–665, Baltimore, Maryland, June. Association for Computational Linguistics. Hai-Son Le, Alexandre Allauzen, and Franc¸ois Yvon. 2012. Continuous space translation models with neural networks. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT ’12, pages 39– 48, Stroudsburg, PA, USA. Association for Computational Linguistics. Yoong Keok Lee and Hwee Tou Ng. 2002. An empirical evaluation of knowledge sources and learning algorithms for word sense disambiguation. In Proceedings of the ACL-02 Conference on Empirical Methods in Natural Language Processing - Volume 10, EMNLP ’02, pages 41–48, Stroudsburg, PA, USA. Association for Computational Linguistics. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Wenzhe Pei, Tao Ge, and Baobao Chang. 2014. Maxmargin tensor neural network for chinese word segmentation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 293–303, Baltimore, Maryland, June. Association for Computational Linguistics. Antti Rosti, Bing Zhang, Spyros Matsoukas, and Rich Schwartz. 2010. BBN system description for WMT10 system combination task. In WMT/MetricsMATR, pages 321–326. Holger Schwenk. 2010. Continuous-space language models for statistical machine translation. Prague Bull. Math. Linguistics, 93:137–146. Holger Schwenk. 2012. Continuous space translation models for phrase-based statistical machine translation. In COLING (Posters), pages 1071–1080. Hendra Setiawan, Bowen Zhou, Bing Xiang, and Libin Shen. 2013. Two-neighbor orientation model with cross-boundary global contexts. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1264–1274, Sofia, Bulgaria, August. Association for Computational Linguistics. Libin Shen, Jinxi Xu, and Ralph Weischedel. 2010. String-to-dependency statistical machine translation. Computational Linguistics, 36(4):649–671, December. 40 Sameer Singh, Sebastian Riedel, Brian Martin, Jiaping Zheng, and Andrew McCallum. 2013. 
Joint inference of entities, relations, and coreference. In Proceedings of the 2013 Workshop on Automated Knowledge Base Construction, AKBC ’13, pages 1– 6, New York, NY, USA. ACM. Matthew Snover, Bonnie Dorr, and Richard Schwartz. 2008. Language and translation model adaptation using comparable corpora. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 857–866, Stroudsburg, PA, USA. Association for Computational Linguistics. Richard Socher, Cliff C. Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing Natural Scenes and Natural Language with Recursive Neural Networks. In Proceedings of the 26th International Conference on Machine Learning (ICML). Richard Socher, Danqi Chen, Christopher D Manning, and Andrew Ng. 2013. Reasoning with neural tensor networks for knowledge base completion. In C.J.C. Burges, L. Bottou, M. Welling, Z. Ghahramani, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 26, pages 926–934. Curran Associates, Inc. Martin Sundermeyer, Tamer Alkhouli, Joern Wuebker, and Hermann Ney. 2014. Translation modeling with bidirectional recurrent neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 14–25, Doha, Qatar, October. Association for Computational Linguistics. Ilya Sutskever, Oriol Vinyals, and Quoc V. V Le. 2014. Sequence to sequence learning with neural networks. In Z. Ghahramani, M. Welling, C. Cortes, N.D. Lawrence, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 27, pages 3104–3112. Curran Associates, Inc. Christoph Tillman. 2004. A unigram orientation model for statistical machine translation. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLT-NAACL 2004: Short Papers, pages 101– 104, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Dong Yu, Li Deng, and Frank Seide. 2012. Large vocabulary speech recognition using deep tensor neural networks. In INTERSPEECH. ISCA. 41
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 408–418, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Aligning Opinions: Cross-Lingual Opinion Mining with Dependencies Mariana S. C. Almeida∗† Cl´audia Pinto∗ Helena Figueira∗ Pedro Mendes∗ Andr´e F. T. Martins∗† ∗Priberam Labs, Alameda D. Afonso Henriques, 41, 2o, 1000-123 Lisboa, Portugal †Instituto de Telecomunicac¸˜oes, Instituto Superior T´ecnico, 1049-001 Lisboa, Portugal {mla,atm}@priberam.pt Abstract We propose a cross-lingual framework for fine-grained opinion mining using bitext projection. The only requirements are a running system in a source language and word-aligned parallel data. Our method projects opinion frames from the source to the target language, and then trains a system on the target language using the automatic annotations. Key to our approach is a novel dependency-based model for opinion mining, which we show, as a byproduct, to be on par with the current state of the art for English, while avoiding the need for integer programming or reranking. In cross-lingual mode (English to Portuguese), our approach compares favorably to a supervised system (with scarce labeled data), and to a delexicalized model trained using universal tags and bilingual word embeddings. 1 Introduction The goal of opinion mining is to extract opinions and sentiments from text (Pang and Lee, 2008; Wilson, 2008; Liu, 2012). With the advent of social media and the increasing amount of data available on the Web, this has become a very active area of research, with applications in summarization of customer reviews (Hu and Liu, 2004; Wu et al., 2011), tracking of newswire and blogs (Ku et al., 2006), question answering (Yu and Hatzivassiloglou, 2003), and text-to-speech synthesis (Alm et al., 2005). While early work has focused on determining sentiment at document and sentence level (Pang et al., 2002; Turney, 2002; Balog et al., 2006), research has gradually progressed towards finegrained opinion mining, where rather than determining global sentiment, the goal is to parse text into opinion frames, identifying opinion expressions, agents, targets, and polarities (Ding et al., 2008), or addressing compositionality (Socher et al., 2013b). Since the release of the MPQA corpus1 (Wiebe et al., 2005; Wilson, 2008), a standard corpus for fine-grained opinion mining of news documents, a long string of work has been produced (reviewed in §2). Despite the large volume of prior work, opinion mining has by and large been limited to monolingual approaches in English.2 This is explained by the heavy effort of annotation necessary for current learning-based approaches to succeed, which delays the deployment of opinion miners for new languages. We bridge the existing gap by proposing a cross-lingual approach to fine-grained opinion mining via bitext projection. This technique has been quite effective in several NLP tasks, such as part-of-speech (POS) tagging (T¨ackstr¨om et al., 2013), named entity recognition (Wang and Manning, 2014), syntactic parsing (Yarowsky and Ngai, 2001; Hwa et al., 2005), semantic role labeling (Pad´o and Lapata, 2009), and coreference resolution (Martins, 2015). 
Given a corpus of parallel sentences (bitext), the idea is to run a pre-trained system on the source side and then to use word alignments to transfer the produced annotations to the target side, creating an automatic training corpus for the impoverished language. To alleviate the complexity of the task, we start by introducing a lightweight representation— called dependency-based opinion mining—and convert the MPQA corpus to this formalism (§3). We propose a simple arc-factored model that permits easy decoding (§4) and we show that, despite 1http://mpqa.cs.pitt.edu/corpora/mpqa_ corpus. 2Besides English, monolingual systems have also been developed for Chinese and Japanese (Seki et al., 2007), German (Clematide et al., 2012) and Bengali (Das and Bandyopadhyay, 2010). 408 its simplicity, this model is on par with state-ofthe-art opinion mining systems for English (§5). Then, through bitext projection, we transfer these dependency-based opinion frames to Portuguese (our target language), and train a system on the resulting corpus (§6). As part of this work, a validation corpus in Portuguese with subjectivity annotations was created, along with a translation of the MPQA Subjectivity lexicon of Wilson et al. (2005).3 Experimental evaluation (§7) shows that our cross-lingual approach surpasses a supervised system trained on a small corpus in the target language, as well as a delexicalized baseline trained using universal POS tags, bilingual word embeddings and a projected lexicon. 2 Related Work A considerable amount of work on fine-grained opinion mining is based on the MPQA corpus. Kim and Hovy (2006) proposed a method for finding opinion holders and topics, with the aid of a semantic role labeler. Choi et al. (2005) and Breck et al. (2007) used CRFs for finding opinion holders and recognizing opinion expressions, respectively. The two things are predicted jointly by Choi et al. (2006), with integer programming, and Johansson and Moschitti (2010), via reranking. The same method was applied later for joint prediction of opinion expressions and their polarities (Johansson and Moschitti, 2011). The advantage of a joint model was also shown by Choi and Cardie (2010) and Yang and Cardie (2014). Yang and Cardie (2012) classified expressions with a semiMarkov decoder, outperforming a B-I-O tagger; in later work, the same authors proposed an ILP decoder to jointly retrieve opinion expressions, holders, and targets (Yang and Cardie, 2013). A more recent work (˙Irsoy and Cardie, 2014) proposes a recurrent neural network to identify opinion spans. All the approaches above rely on a span-based representation of the opinion elements. This makes joint decoding procedures more complicated, since they must forbid overlap of opinion elements or add further constraints, leading to integer programming or reranking strategies. Besides, there is little consensus about what should be the correct span boundaries, the inter-annotator agreement being quite low (Wiebe et al., 2005). In 3The Portuguese corpus and the lexicon are available at http://labs.priberam.com/Resources. constrast, we use dependencies to model opinion elements and relations, leading to a compact representation that does not depend on spans and which is tractable to decode. A dependency scheme was also used by Wu et al. (2011) for fine-grained opinion mining. Our work differs in which we mine opinions in news articles instead of product reviews, a considerably different task. In addition, the approach of Wu et al. 
(2011) relies on “span nodes” (instead of head words), requiring solving an ILP followed by an approximate heuristic. Query-based multilingual opinion mining was addressed in several NTCIR shared tasks (Seki et al., 2007; Seki et al., 2010).4 However, to our best knowledge, a cross-lingual approach has never been attempted. Some steps were taken by Mihalcea et al. (2007) and Banea et al. (2008), who translated an English lexicon and the MPQA corpus to Romanian and Spanish, but for the much simpler task of sentence-level subjectivity analysis. Cross-lingual sentiment classification was addressed by Wan (2009), Prettenhofer and Stein (2010) and Wei and Pal (2010) at document level, and by Lu et al. (2011) at sentence level. Recently, Gui et al. (2013) applied projection learning for opinion mining in Chinese. However, this work only addresses agent detection and requires translating the MPQA corpus. While all these works are relevant, none addresses fine-grained opinion mining in its full generality, where the goal is to predict full opinion frames. 3 Dependency-Based Opinion Mining This work addresses various elements of subjectivity annotated in the MPQA corpus, namely: • direct-subjective expressions (henceforth, opinions) that are direct mentions of a private state, e.g. opinions, beliefs, emotions, sentiments, speculations, goals, etc.; • the opinion agent, i.e., the holder of the opinion; • the opinion target, i.e., what is being argued about; • the opinion polarity, i.e., the sentiment (positive, negative or neutral) towards the target. As an example, consider the sentence in Figure 1, which has two opinions, expressed by the 4NTCIR-8 had a cross-lingual track but in a very different sense: there, queries and documents are in different languages; in contrast, we transfer a model accross languages. 409 spans “is believed” (O1) and “are against” (O2). The first opinion has an implicit agent and a neutral polarity toward the target “the rich elites” (T1). This target is also the agent (A2) of the second opinion, which has a negative polarity toward “Hugo Ch´avez” (T2). 3.1 Motivation As noted in prior work (Choi et al., 2005; Kim and Hovy, 2006; Johansson and Moschitti, 2010), one source of difficulty when learning opinion miners on MPQA is with the boundaries of the entity spans. The fact that no criterion for choosing these boundaries is explicitly defined in the annotation guidelines (Wiebe et al., 2005) leads to a low inter-annotator agreement. To circumvent this problem and make the learning task easier, we depart from the classical span-based approaches toward dependency-based opinion mining. This decision is inspired by the success of dependency models for syntax and semantics (Buchholz and Marsi, 2006; Surdeanu et al., 2008). These dependency relations can be further converted to opinion spans (as described in §3.3), or directly used as features in downstream applications. As we will see, a compact representation based on dependencies can achieve state-of-the-art results and has the advantage of being easily transferred to other languages through a parallel corpus. 3.2 Dependency Graph Figure 1 depicts a sentence-level dependency representation for fine-grained opinion mining. The overall structure is a graph whose nodes are head words (plus two special nodes, root and null), connected by labeled arcs, as outlined below. Determining head nodes. 
The three opinion elements that we want to detect (opinions, agents and targets) are each represented by a head node, which corresponds to a single word (underlined in Figure 1). When converting the MPQA corpus to dependencies, we determine this “representative” word automatically, by using the following simple heuristic: we first parse the sentence using the Stanford dependency parser (Socher et al., 2013a); then, we pick the last word in the span whose syntactic parent is outside the span (if the span is a syntactic phrase, there is only one word whose parent is outside the span, which is the lexical head). The same heuristic has been used for identifying the heads of mention spans in coreference resolution (Durrett and Klein, 2013). Defining labeled arcs. The opinion relations are represented as labeled arcs that link these head nodes. Two artificial nodes are added: a root node, which links to all nodes that represent opinion words, with the label OPINION; and a null node, which is used for representing implicit relations. To represent opinion-agent relations, we draw an arc labeled AGENT toward the agent word. For opinion-target relations, the arc is toward the target word and has one of the labels TARGET:0, TARGET:+, or TARGET:-; this encodes the polarity in addition to the type of relation. We also include implicit arcs for opinion elements whose agent or target is not mentioned inside the sentence—these are modeled as arcs pointing to the null node. Dependency opinion graph. We have the following requirements for a well-formed dependency opinion graph: 1. No self-arcs or arcs linking root to null. 2. An arc is labeled as OPINION if and only if it comes from the root node. 3. Arcs labeled as AGENT or TARGET must come from an opinion node (i.e., a node with an incoming OPINION arc). 4. Every opinion node has exactly one AGENT and one TARGET outgoing arcs (possibly implicit).5 Similarly to prior work (Choi and Cardie, 2010; Johansson and Moschitti, 2011; Johansson and Moschitti, 2013), we map the MPQA’s polarityinto three levels: positive, negative and neutral, where the latter includes spans without polarity annotation or annotated as “both”. As in Johansson and Moschitti (2013), we also ignore the “uncertain” aspect of the annotated polarities. 3.3 Dependency-to-Span Conversion To evaluate the opinion miner against manual annotations and compare with other systems, we need a procedure to convert back from predicted dependencies to spans. In this work, we used a very simple procedure that we next describe, 5Even though this assumption is not always met in practice, it is typical in MPQA (only 10% of the opinions have multiple agents, typically coreferent; and only 13% have multiple targets). When multiple agents or targets exist, we keep the ones that are closest to the opinion expression. 410 Figure 1: Example of an opinion mining graph in our dependency formalism. Heads are underlined. which assumes the sentence was previously parsed using a syntactic dependency parser. To generate agent and target spans, we compute the largest span, containing the head word, whose words are all descendants in the dependency parse tree and that are, simultaneously, not punctuations. To generate opinion spans, we start with the head word and expand the span by adding all neighbouring verbal words. In the case of English, we also allow adverbs, adjectives, modal verbs and the word to, when expanding to the left. 
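A minimal sketch of these two conversions, under one possible reading of the heuristics, is given below; the dependency tree is assumed to be available as a parent-index array, and the punctuation test, tie-breaking and function names are illustrative choices rather than the authors' implementation. The English-specific expansion of opinion spans (adverbs, adjectives, modals and "to") is omitted.

# Assumes heads[i] = index of the syntactic parent of token i, or -1 for root.

def span_head(span, heads):
    """Pick the last word in the span whose syntactic parent lies outside it."""
    candidates = [i for i in span if heads[i] not in span]
    return candidates[-1] if candidates else span[-1]

def descendants(head, heads):
    """All token indices whose chain of parents reaches `head` (plus head itself)."""
    out = set()
    for i in range(len(heads)):
        j = i
        while j != -1:
            if j == head:
                out.add(i)
                break
            j = heads[j]
    return out

def head_to_span(head, heads, tokens,
                 is_punct=lambda w: not any(c.isalnum() for c in w)):
    """Largest contiguous span around the head made of its non-punct descendants."""
    desc = descendants(head, heads)
    left = right = head
    while left - 1 in desc and not is_punct(tokens[left - 1]):
        left -= 1
    while right + 1 in desc and not is_punct(tokens[right + 1]):
        right += 1
    return list(range(left, right + 1))

# Toy example with an assumed parse of "the rich elites are against Chavez":
tokens = ["the", "rich", "elites", "are", "against", "Chavez"]
heads  = [2, 2, 3, -1, 3, 4]
print(span_head([0, 1, 2], heads))     # 2  -> "elites" heads the agent span
print(head_to_span(2, heads, tokens))  # [0, 1, 2] -> "the rich elites"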
The application of this simple approach to the gold dependency graphs in the training partition of the MPQA leads to oracle F1 scores of 86.0%, 95.8% and 93.0% in the reconstruction of opinion, agent and target spans, respectively, according to the proportional scores described in §5.2. 4 Arc-Factored Model One of the advantages of the dependency representation is that we can easily decode opinion-agenttarget relations without the need of complicated constrained sequence models or integer programming, as done in prior work (Choi et al., 2006; Yang and Cardie, 2012; Yang and Cardie, 2013). 4.1 Decoding We model dependency-based opinion mining as a structured classification problem. Let x be a sentence and y ∈Y(x) a set of well-formed dependency graphs, according to the constraints stated in §3. We define a score function that decomposes as a sum of labeled arc scores, f(x, y) = X a∈y fa(x, ya) (1) where ya is a labeled arc and the sum is over the arcs of the graph y. We use a linear model with weight vector w and local features φa(x, ya): fa(x, ya) = w · φa(x, ya). (2) For making predictions, we need to compute by = arg max y∈Y(x) f(x, y). (3) Under the assumptions stated in §3, this problem decouples into independent maximization problems (one for each possible opinion word in the sentence). The detailed procedure is as follows, where arcs a can take the form o →h (opinion to agent) and o →t (opinion to target). For every candidate opinion word o: 1. Obtain the most compatible agent word, bh := arg maxh fo→h(x, AGENT); 2. Obtain the best target word and its polarity, (bt, bp) := arg maxt,p fo→t(x, TARGET:p); 3. Compute the total score of this candidate opinion as so := froot→o(x, OPINION) + fo→bh(x, AGENT) + fo→bt(x, TARGET:bp). Then, if so ≥0, add the arcs root →o, o →bh, and o →bt to the dependency graph, respectively with labels OPINION, AGENT, and TARGET:bp. For a sentence with L words, this decoding procedure takes O(L2) time. In practice, we speed up this process by pruning from the candidate list arcs whose connected POS were not observed in the training set and whose length were larger than the ones observed in the training set. 4.2 Features We now describe our features φa, which are computed after processing the sentence to predict POS tags, syntactic dependency trees, lemmas and voice (active or passive) information. For English, we used the Stanford dependency parser (Socher et al., 2013a) for the syntactic annotations, the Porter stemmer to compute word stems, and a set of rules for computing the voice of each word. Our Portuguese corpus include all these preprocessing elements (§6.3), with the exception of the voice information (features depending on voice were only used for English). We also used the Subjectivity Lexicon6 of Wilson et al. (2005) that we translated to Portuguese 6http://mpqa.cs.pitt.edu/lexicons/ subj_lexicon/ 411 (§6.3), and a set of negation words (e.g. not, never, nor) and quantity words (e.g. very, much, less) collected for both languages. Our arc-factored features are described below; they are inspired by prior work on dependency parsing (Martins et al., 2013) and fine-grained opinion mining (Breck et al., 2007; Johansson and Moschitti, 2013). Opinion features. We define a set of features that only look at the opinion word; special symbols are used if the opinion is connected to a root or null node. The features below are also conjoined with the arc label. • OPINION WORD. The word itself, the lemma, the POS, and the voice. 
Conjunction of the word with the POS, and of the lemma with the POS. • BIGRAMS. Bigrams of words and POS corresponding to the opinion word conjoined with its previous (and next) word. • LEXICON (BASIC). Conjunction of the strength and polarity of the opinion word in the Subjectivity Lexicon6 (e.g., “weaksubj+neg”). • LEXICON (COUNT). Number of subjective words (total, positive and negative) in a sentence, with and without being conjoined with the polarity of the opinion word in the lexicon. • LEXICON (CONTEXT). For each word that is in the lexicon and within the 4-word context of the opinion, the form and the polarity of that word in the lexicon, with and without being conjoined with the form and the polarity in the lexicon of the opinion word. Besides the 4-word context, we also used the next/previous word in the sentence which is in the lexicon. • NEGATION AND QUANTITY WORDS. Within the 4-word context, features indicating if a word is a negation or quantity word, conjoined with the word itself and the opinion word. • SYNTACTIC PATH. The number of words up to the top of the syntactic dependency tree, and the sequence of POS tags in that path. Opinion-Argument features. In case of arcs that neither connect to null nor root, the features above are also conjoined with the binned distance between the two words.For these arcs, we did not use the LEXICON (COUNT)/(CONTEXT) features, but we added features regarding the pair of opinion-argument words (below). • OPINION-ARGUMENT WORD PAIR. Several conjunctions of word form, POS, voice and syntactic dependency relations corresponding to the pair opinion-argument. • OPINION-ARGUMENT SYNTACTIC PATH. The syntactic path from the opinion word to the argument, conjoined with the POS and the dependency relations in the path (in Figure 1, for the agent “elites” headed by “are” with relation nsuj, we have: “VBP↓NNS” and “nsuj↓”). For arcs that neither connect to null or root, we conjoin voice features with the label, distance, and the direction of the arc. For these arcs, we also include back-off features where the polarity information is removed from the (target) labels. 5 English Monolingual Experiments In a first set of experiments, we evaluated the performance of our dependency-based model for opinion mining (§3) in the MPQA English corpus. 5.1 Learning We trained arc-factored models by running 25 epochs of max-loss MIRA (Crammer et al., 2006). Our cost function takes into account mismatches between predicted and gold dependencies, with a cost CP on labeled arcs incorrectly predicted (false positives) and a cost CR = 1 −CP on missed gold labeled arcs (false negatives). The cost CP , the regularization constant, and the number of epochs were tuned in the development set. 5.2 Evaluation Metrics Opinion spans (Op.) are evaluated with F1 scores, according to two matching criteria commonly used in the literature: overlap matching (OM), where a predicted span is counted as correct if it overlaps a gold one, and proportional matching (PM), proposed by Johansson and Moschitti (2010). For the latter, we use the following formula for the recall, where we consider the sets of gold (G) and predicted (P) opinion spans:7 R(G, P) = X p∈P max g∈G |g T p|/|p| |P| ; (4) 7This metric is slightly different from the PM metric of Johansson and Moschitti (2010), in which recall was computed as R(G, P) = P p∈P P g∈G |g∩p|/|p| |P| . The reason why we replace the “sum” by a “max” is that each predicted span p in (4) could contribute to the recall with a value greater than 1. 
Since most of the predicted spans only overlap a single gold span, this fix has a very small effect in the final scores. 412 the precision is P(G, P) = R(P, G). We also report metrics based on a head matching (HM) criterion, where a predicted span is considered correct if its syntactic head matches the head of the gold span. We consider that a pair opinion-agent (Op-Ag.) or opinion-target (Op-Tg.) is correctly extracted according to the OM or the HM criteria, if both the elements satisfy these criteria and the relation holds in the gold data. We also compute the metric described in Johansson and Moschitti (2010) which measures how well agents of opinions are predicted based on a proportional matching (PM) criterion. This metric is applied to evaluate the extraction of both agents and targets. Finally, to evaluate the opinions’ polarities (Op-Pol. metric) we consider as correct opinions where the span and polarity both match the gold ones. 5.3 Results: Dependency-Based Model We assess the quality of our monolingual dependency-based model by comparing it to the recent state-of-the-art approach of Johansson and Moschitti (2013), whose code is available online.8 That paper reports the performance of a basic span-based pipeline system (which extracts opinions with a CRF, followed by two separate classifiers to detect polarities and agents), and of a more sophisticated system that applies a reranking procedure to account for more complex features that consider interactions accross opinion elements. We ran experiments using the same data and MPQA partitions as Johansson and Moschitti (2013). However, since our system is designed for predicting opinion, agents and targets together, we removed the documents that were not annotated with targets. The final train/development/test sets have a total of 6,774/1,404/2,559 sentences and 3,834/881/1,426 opinions, respectively. Table 1 reports the results; since the systems of Johansson and Moschitti (2013) do not predict targets, Table 1 omits target scores.9 We observe that our dependency-based system achieves results competitive with the best results of Johansson and Moschitti (2013) and clearly above the ones reached by their basic system that does not use re-ranking features. Though the two systems are not fully comparable,10 the results in Table 1 8http://demo.spraakdata.gu.se/richard/ unitn_opinion/details.html 9We will report target scores later in §7. 10Our system makes use of target annotations to predict the opinion frames, while Johansson and Moschitti (2013) show that our dependency-based approach (§3.2) followed by a simple dependency-to-span conversion (§3.3) is, despite its simplicity, on par with a top-performing opinion mining system. We conjecture that this is due to the ability to extract opinions, agents, and targets jointly using exact decoding. Note that our proposed dependency scheme would also be able to include additional global features relating pairs of opinions (by adding scores to pairs of opinion arcs) or two opinions having the same agent (by adding scores to pairs of agent arcs sharing its argument), similar to the reranking features used by Johansson and Moschitti (2013). Similar second-order scores have been used in syntactic and semantic dependency parsing (Martins et al., 2013; Martins and Almeida, 2014), but with an increase in the complexity of the model and of the decoder. 
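Before turning to the cross-lingual setting, the arc-factored decoding of §4.1 can be summarised in a short sketch. The arc scorer below stands in for f_a(x, y_a) = w · φ_a(x, y_a) and is assumed to be given; the POS and arc-length pruning mentioned in §4.1 is omitted, and the constant-scorer smoke test at the end is only there to show the call signature.

ROOT, NULL = "<root>", "<null>"

def decode(words, score):
    """score(head, dep, label) plays the role of f_a(x, y_a) and is assumed given."""
    arcs = []
    for o in range(len(words)):                      # every word is a candidate opinion
        deps = [d for d in range(len(words)) if d != o] + [NULL]
        # step 1: most compatible agent word (possibly the null node)
        best_agent = max(deps, key=lambda h: score(o, h, "AGENT"))
        # step 2: best target word together with its polarity label
        best_target, target_label = max(
            ((t, "TARGET:" + p) for t in deps for p in ("0", "+", "-")),
            key=lambda tl: score(o, tl[0], tl[1]))
        # step 3: total score of the candidate opinion frame
        total = (score(ROOT, o, "OPINION")
                 + score(o, best_agent, "AGENT")
                 + score(o, best_target, target_label))
        if total >= 0:                               # keep the frame only if non-negative
            arcs += [(ROOT, o, "OPINION"),
                     (o, best_agent, "AGENT"),
                     (o, best_target, target_label)]
    return arcs

# Smoke test with a constant scorer (accepts every word as an opinion):
print(len(decode(["I", "love", "it"], lambda h, d, l: 0.0)))   # 9 arcs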
6 Cross-Lingual Opinion Mining We now turn to the problem of learning a opinion mining system for a resource-poor language (Portuguese), in a cross-lingual manner. We use a bitext projection approach (§6.1), whose only requirements are a model for a resource-rich language (English) and parallel data (§6.2). 6.1 Bitext Projection Our methodology is outlined as Algorithm 1. For simplicity, we call the source and target languages English (e) and “foreign” (f), respectively. The procedure is inspired by the idea of bitext projection (Yarowsky and Ngai, 2001). We start by training an English system on the labeled data Le (line 1), which in our case is the MPQA v.2.0 corpus. This system is then used to label the English side of the parallel data, automatically identifying opinion frames (line 2). The next step is to run a word aligner on the parallel data (line 3). The automatic alignments are then used to project the opinion frames to the target language (along with some filtering), yielding an automatic corpus bD(f) (line 4), which finally serves to train a system for the target language (line 5). 6.2 Parallel Data We use an English-Portuguese parallel corpus based on the scientific news Brazilian magazine Revista Pesquisa FAPESP, collected by Aziz and has access not only to direct subjective spans but also to subjective expressions annotations with their agents and polarity information. 413 JM13, BASIC JM13, RERANKING OUR SYSTEM HM PM OM HM PM OM HM PM OM Op. 56.3 56.2 60.6 58.6 59.2 63.7 61.6* 59.8 65.1 Op-Ag. 40.3 47.1 44.9 42.4 51.4 48.1 45.7* 51.4 50.3* Op-Tg. 31.3* 48.3* 48.3* Op-Pol. 46.1 45.9 49.3 48.5 48.9* 52.5 47.9 47.0 50.7 Table 1: Method comparison: F1 scores obtained in the MPQA corpus, for our dependency based method and the approaches in Johansson and Moschitti (2013), with and without reranking. The symbol * indicates that the best system beats the other systems with statistical significance, with p < 0.05 and according to a bootstrap resampling test (Koehn, 2004). Figure 2: Excerpt of a bitext document from FAPESP, with automatic opinion dependencies. The annotations are directly projected to Portuguese via automatic word alignments. Algorithm 1 Cross-Lingual Opinion Mining Input: Labeled data Le, parallel data De and Df. Output: Target opinion mining system Sf. 1: Se ←LEARNOPINIONMINER(Le) 2: bDe ←RUNOPINIONMINER(Se, De) 3: De↔f ←RUNWORDALIGNER(De, Df) 4: bDf ←PROJECTANDFILTER(De↔f, bDe) 5: Sf ←LEARNOPINIONMINER( bDf) Specia (2011). Though this corpus is in Brazilian Portuguese (while our validation corpus is in European Portuguese), we preferred FAPESP over other commonly used parallel corpora (such as the Europarl and UN datasets), since it is closer to our newswire target domain, with a smaller prominence of direct speech. We computed word alignments using the Berkeley aligner (Liang et al., 2006), intersected them and filtered out all the alignments whose confidence is below 0.95. After annotating the English side of FAPESP with the pre-trained system ( bDe in Algorithm 1, with a total of 166,719 sentences and 81,492 opinions), the high confidence alignments (De↔f) are used to project the annotations to the Portuguese side of the corpus. The automatic annotations produced by our dependency-based system are easily transferred at a word level (for words with high confidence alignments), as illustrated in Figure 2. 
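To make the projection step (lines 3-4 of Algorithm 1) concrete, the sketch below maps opinion arcs predicted on the source side onto target tokens through a source-to-target alignment dictionary, dropping frames that do not project fully and sentences with low alignment coverage, in line with the filtering described next. The arc and alignment data structures, the threshold names and the toy example are assumptions made for illustration.

ROOT, NULL = "<root>", "<null>"

def project_frame(frame_arcs, align):
    """Project one opinion frame; return None if any element is unaligned."""
    projected = []
    for head, dep, label in frame_arcs:
        new_head = head if head == ROOT else align.get(head)
        new_dep = dep if dep == NULL else align.get(dep)
        if new_head is None or new_dep is None:
            return None                      # discard frames that do not project fully
        projected.append((new_head, new_dep, label))
    return projected

def project_sentence(frames, align, n_src, min_coverage=0.7):
    """Keep the sentence only if enough source words are aligned (70% here)."""
    if len(align) < min_coverage * n_src:
        return None
    projected = [project_frame(f, align) for f in frames]
    return [p for p in projected if p is not None]

# Toy example: an opinion headed at source position 2 with an implicit (null)
# agent and a target headed at position 5, projected through the alignments.
frames = [[(ROOT, 2, "OPINION"), (2, NULL, "AGENT"), (2, 5, "TARGET:0")]]
align = {0: 0, 1: 1, 2: 3, 4: 5, 5: 6, 6: 7}
print(project_sentence(frames, align, n_src=8))
# [[('<root>', 3, 'OPINION'), (3, '<null>', 'AGENT'), (3, 6, 'TARGET:0')]]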
To improve the quality of the resulting corpus, we excluded sentences whose alignments cover less than 70% of the words in the target side of the corpus, or sentences whose opinion elements were not fully projected through high confidence alignments. At this point, we obtain an automatically annotated corpus in Portuguese ( bDf), with 106,064 sentences and 32,817 opinions. 6.3 Portuguese Opinion Mining Corpus For validation purposes, we also created a Portuguese corpus with manually annotated finegrained opinions. The corpus consists of a subset of the documents of the Priberam Compressive Summarization Corpus11 (Almeida et al., 2014), which contains 80 news topics with 10 documents each, collected from several Portuguese newspapers, TV and radio websites in the biennia 2010– 2011 and 2012–2013. In the scope of the current work, we selected and annotated one document of each of the 80 topics. The first biennium was selected as the test set and the second biennium was split into development and training sets (see Ta11http://labs.priberam.com/Resources/ PCSC 414 ble 2 for statistics). #doc. #sent. #opin. Train 20 441 240 Dev 20 225 197 Test 40 560 391 Table 2: Number of documents, sentences and opinions in the Portuguese Corpus. HM PM OM Op. 77.0 76.7 79.2 Op-Ag. 69.1 72.3 73.5 Op-Tg. 61.9 65.4 71.4 Op-Pol. 49.4 49.1 50.7 Table 3: Inter-annotator agreement in the test partition (shown are F1 scores). The corpus was annotated in a similar vein as the MPQA (Wiebe et al., 2005), with the addition of the head node for each element of the opinion frame. It includes spans for direct-subjective expressions with intensity and polarity information; agent spans; and target spans. The annotation was carried out by three linguists, after reading the MPQA annotation guidelines (Wiebe et al., 2005; Wilson, 2008) and having a small practice period using the provided examples and some MPQA annotated sentences. Each document was annotated by two of the three linguists and then revised by the third linguist, who (in case of any doubts) discussed with the initial annotators to reach for the final consensus. Scores for inter-annotator agreement are shown in Table 3. The corpus was annotated with automatic POS tags and dependency parse trees using TurboParser (Martins et al., 2013).12 We used an in-house lemmatizer to obtain lemmas for each inflected word in the corpus. A Portuguese lexicon of subjectivity was created by translating the words in the Subjectivity Lexicon of Wilson et al. (2005). The annotated corpus and the translated subjectivity lexicon are available at http://labs.priberam.com/ Resources/Fine-Grained-Opinion-Corpus, and http://labs.priberam.com/Resources/ Subjectivity-Lexicon-PT, respectively. 12http://www.ark.cs.cmu.edu/TurboParser OUR SYSTEM DELEXICALIZED HM PM OM HM PM OM Op. 65.7 63.5 69.8 50.1 45.8 52.7 Op-Ag. 47.6 48.8 51.1 33.8 34.8 35.7 Op-Tg. 34.9 44.8 50.3 19.9 28.0 32.1 Op-Pol. 51.5 50.2 54.4 36.7 34.7 38.8 Table 4: F1 scores obtained in English (MPQA), for our full system and the DELEXICALIZED one. 7 Cross-Lingual Experiments In a final set of experiments, we compare three systems of fine-grained opinion mining for Portuguese. All were trained as described in §5.1. 7.1 System Description Baseline #1: Supervised System. A SUPERVISED system was trained on the small Portuguese training set described in §6.3. Though being a small training corpus, this is, to the best of our knowledge, the only existing corpus with finegrained opinions in Portuguese. 
We used the same arc-factored model and features described in §4. Baseline #2: Delexicalized System with Bilingual Embeddings. This baseline consists of a direct model transfer: a DELEXICALIZED system is trained in the source language, without language specific features, so that it can be directly applied to the target language. Despite its simplicity, this strategy managed to provide a fairly strong baseline in several NLP tasks (Zeman and Resnik, 2008; McDonald et al., 2011; Søgaard, 2011). To achieve a unified feature representation, we mapped all language-specific POS tags to universal tags (Petrov et al., 2012), and removed all features depending on the dependency relations, but maintained those depending on the syntactic path (but not on the dependency relations themselves). In addition, we replaced the lexical features by 128-dimensional cross-lingual word embeddings.13 To obtain these bilingual neural embeddings, we ran the method of Hermann and Blunsom (2014) on the parallel data (§6.1). We scaled the embeddings by a factor of 2.0 (selected on the dev-set), following the procedure described in Turian et al. (2010). We trained the English delexicalized system on the MPQA corpus, using the same test documents 13A delexicalized system trained without the word embeddings had a worse performance. 415 BASELINE #1 (SUP.) BASELINE #2 (DELEX.) BITEXT PROJECTION HM PM OM HM PM OM HM PM OM Op. 49.4 48.7 50.8 33.1 32.1 34.3 58.0* 55.7* 58.0* Op-Ag. 23.5 27.2 31.5 14.3 18.8 20.0 30.8* 31.2* 36.2* Op-Tg. 23.0 24.9 30.6 11.0 15.7 19.0 29.4* 29.4* 35.6* Op-Pol. 24.1 23.8 24.7 16.6 16.4 17.6 35.7* 34.1* 35.7* Table 5: Comparison of cross-lingual approaches. F1 scores obtained in our Portuguese validation corpus using: a SUPERVISED system trained on the small available data, a DELEXICALIZED system trained with universal POS tags and multilingual embeddings and our BITEXT PROJECTION OF DEPENDENCIES. The symbol * indicates that the best system beats the other systems with statistical significance, with p < 0.05 and according to a bootstrap resampling test (Koehn, 2004). as Riloff and Wiebe (2003) and whose list is available with the corpus, but selecting only documents annotated with targets. We randomly split the remaining documents into train and development sets, respectively with a total of 6,471 and 782 sentences.14 Table 4 shows the performance of the delexicalized baseline in English, compared with a lexicalized system. We will see how this model behaves in a cross-lingual setting in §7.2. Our System: Bitext Projection of Opinion Dependencies. Finally, we implemented our crosslingual BITEXT approach (§6). We trained the (lexicalized) English model on the MPQA corpus (the performance of this model is shown in Table 4). Then, we ran this model on the English side of the parallel corpus, generating automatic annotations, and projected these annotations to the Portuguese side, as described in §6.2. Finally, a Portuguese model was trained on these projected annotations using the arc-factored model and features described in §4. 7.2 Comparison Table 5 shows the F1 scores obtained by the three systems on the Portuguese test partition. We observe that the BITEXT approach outperformed the SUPERVISED and the DELEXICALIZED ones in all metrics with a considerable margin, which shows the effectiveness of our proposed method. 
The SUPERVISED system suffers from the fact that the training set is too small to allow good generalization; the bitext projection method, in contrast, can create arbitrarily large training corpora without any annotation effort. The performance of 14Note that this split is different from the one we used in §5. There we used the same split as Johansson and Moschitti (2013), for a fair comparison with their system; here, we follow the standard MPQA test partition. the DELEXICALIZED system is rather disappointing. This result is justified by a decrease of performance in English due to the delexicalization (cf. Table 4), followed by an extra loss of quality due to language differences. Though our BITEXT approach scores the best, the scores are behind the range of values obtained for English (Table 4), and far from the interannotator agreement numbers (Table 3), suggesting room for improvement. The polarity scores in Table 5 appear to be relatively low. This fact is probably be justified with the annotator agreement scores (Table 3) which are considerably lower for these metrics. 8 Conclusions We presented a cross-lingual framework for finegrained opinion mining. We used a bitext projection technique to transfer dependency-based opinion frames from English to Portuguese. Experimentally, our dependency model achieved state-of-the-art results for English, and the Portuguese system trained with bitext projection outperformed two baselines: a supervised system trained on a small dataset, and a delexicalized model with bilingual word embeddings. 9 Acknowledgements We would like to thank the anonymous reviewers for their insightful comments, and Richard Johansson for sharing his code and for answering several questions. This work was partially supported by the EU/FEDER programme, QREN/POR Lisboa (Portugal), under the Intelligo project (contract 2012/24803) and by a FCT grants UID/EEA/50008/2013 and PTDC/EEISII/2312/2012. 416 References Cecilia Ovesdotter Alm, Dan Roth, and Richard Sproat. 2005. Emotions from text: machine learning for textbased emotion prediction. In EMNLP. Miguel B. Almeida, Mariana S. C. Almeida, Andr´e F. T. Martins, Helena Figueira, Pedro Mendes, and Cl´audia Pinto. 2014. Priberam compressive summarization corpus: A new multi-document summarization corpus for european portuguese. In LREC. Wilker Aziz and Lucia Specia. 2011. Fully automatic compilation of a Portuguese-English parallel corpus for statistical machine translation. In STIL. Krisztian Balog, Gilad Mishne, and Maarten de Rijke. 2006. Why are they excited?: Identifying and explaining spikes in blog mood levels. In EACL. Carmen Banea, Rada Mihalcea, Janyce Wiebe, and Samer Hassan. 2008. Multilingual subjectivity analysis using machine translation. In EMNLP. Eric Breck, Yejin Choi, and Claire Cardie. 2007. Identifying expressions of opinion in context. In IJCAI. Sabine Buchholz and Erwin Marsi. 2006. CoNLL-X shared task on multilingual dependency parsing. In CoNLL. Yejin Choi and Claire Cardie. 2010. Hierarchical sequential learning for extracting opinions and their attributes. In ACL. Yejin Choi, Claire Cardie, Ellen Riloff, and Siddharth Patwardhan. 2005. Identifying sources of opinions with conditional random fields and extraction patterns. In EMNLP. Yejin Choi, Eric Breck, and Claire Cardie. 2006. Joint extraction of entities and relations for opinion recognition. In EMNLP. Simon Clematide, Stefan Gindl, Manfred Klenner, Stefanos Petrakis, Robert Remus, Josef Ruppenhofer, Ulli Waltinger, and Michael Wiegand. 2012. 
MLSA A Multilayered Reference Corpus for German Sentiment Analysis. In LREC. Koby Crammer, Ofer Dekel, Joseph Keshet, Shai ShalevShwartz, and Yoram Singer. 2006. Online PassiveAggressive Algorithms. Journal of Machine Learning Research. Dipankar Das and Sivaji Bandyopadhyay. 2010. Labeling emotion in bengali blog corpus a fine grained tagging at sentence level. In (ALR8), COLING. Xiaowen Ding, Bing Liu, and Philip S Yu. 2008. A holistic lexicon-based approach to opinion mining. In WSDM. Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In EMNLP. Lin Gui, Ruifeng Xu, Jun Xu, and Chenxiang Liu. 2013. A cross-lingual approach for opinion holder extraction. Journal of Computational Information Systems, 9(6). Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual Models for Compositional Distributional Semantics. In ACL. Minqing Hu and Bing Liu. 2004. Mining opinion features in customer reviews. In AAAI. Rebecca Hwa, Philip Resnik, Amy Weinberg, Clara Cabezas, and Okan Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11(3). Ozan ˙Irsoy and Claire Cardie. 2014. Opinion mining with deep recurrent neural networks. In EMNLP. Richard Johansson and Alessandro Moschitti. 2010. Reranking models in fine-grained opinion analysis. In COLING. Richard Johansson and Alessandro Moschitti. 2011. Extracting opinion expressions and their polarities: exploration of pipelines and joint models. In ACL. Richard Johansson and Alessandro Moschitti. 2013. Relational features in fine-grained opinion analysis. Computational Linguistics, 39(3). Soo-Min Kim and Eduard Hovy. 2006. Extracting opinions, opinion holders, and topics expressed in online news media text. In SST. P. Koehn. 2004. Statistical signicance tests for machine translation evaluation. In ACL. Lun-Wei Ku, Yu-Ting Liang, and Hsin-Hsi Chen. 2006. Opinion extraction, summarization and tracking in news and blog corpora. In AAAI. Percy Liang, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In NAACL. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1). Bin Lu, Chenhao Tan, Claire Cardie, and Benjamin K. Tsou. 2011. Joint bilingual sentiment classification with unlabeled parallel corpora. In ACL. Andr´e F. T. Martins and M. S. C. Almeida. 2014. Priberam: A turbo semantic parser with second order features. In SemEval. Andr´e F. T. Martins, Miguel B. Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order nonprojective turbo parsers. In ACL. Andr´e F. T. Martins. 2015. Transferring coreference resolvers with posterior regularization. In ACL. Ryan McDonald, Slav Petrov, and Keith Hall. 2011. Multisource transfer of delexicalized dependency parsers. In EMNLP. Rada Mihalcea, Carmen Banea, and Janyce Wiebe. 2007. Learning multilingual subjective language via crosslingual projections. In ACL. Sebastian Pad´o and Mirella Lapata. 2009. Cross-lingual annotation projection for semantic roles. Journal of Artificial Intelligence Research, 36(1). Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2). Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In EMNLP. 417 Slav Petrov, Dipanjan Das, and Ryan McDonald. 2012. A universal part-of-speech tagset. In LREC. Peter Prettenhofer and Benno Stein. 2010. 
Cross-language text classification using structural correspondence learning. In ACL. Ellen Riloff and Janyce Wiebe. 2003. Learning extraction patterns for subjective expressions. In EMNLP. Yohei Seki, David Kirk Evans, Lun-Wei Ku, Hsin-Hsi Chen, Noriko Kando, and Chin-Yew Lin. 2007. Overview of opinion analysis pilot task at NTCIR-6. In NTCIR-6. Yohei Seki, Lun-Wei Ku, Le Sun, Hsin-Hsi Chen, and Noriko Kando. 2010. Overview of opinion analysis pilot task at NTCIR-8: A Step Toward Cross Lingual Opinion Analysis. In NTCIR-8. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013a. Parsing with compositional vector grammars. In ACL. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP. Anders Søgaard. 2011. Data point selection for crosslanguage adaptation of dependency parsers. In ACL. Mihai Surdeanu, Richard Johansson, Adam Meyers, Lu´ıs M`arquez, and Joakim Nivre. 2008. The CoNLL-2008 Shared Task on Joint Parsing of Syntactic and Semantic Dependencies. In CoNLL. Oscar T¨ackstr¨om, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tagging. Trans. of the Association for Computational Linguistics. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semisupervised learning. In ACL. Peter D. Turney. 2002. Thumbs up or thumbs down?: Semantic orientation applied to unsupervised classification of reviews. In ACL. Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In ACL. Mengqiu Wang and Chris Manning. 2014. Cross-lingual projected expectation regularization for weakly supervised learning. Trans. of the Association for Computational Linguistics, 2. Bin Wei and Christopher Pal. 2010. Cross lingual adaptation: an experiment on sentiment classifications. In ACL. Janyce Wiebe, Theresa Wilson, and Claire Cardie. 2005. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2-3). Theresa Wilson, Janyce Wiebe, and Paul Hoffmann. 2005. Recognizing contextual polarity in phrase-level sentiment analysis. In EMNLP. Theresa Wilson. 2008. Fine-Grained Subjectivity Analysis. Ph.D. thesis, University of Pittsburgh. Yuanbin Wu, Qi Zhang, Xuanjing Huang, and Lide Wu. 2011. Structural opinion mining for graph-based sentiment representation. In EMNLP. Bishan Yang and Claire Cardie. 2012. Extracting opinion expressions with semi-markov conditional random fields. In EMNLP. Bishan Yang and Claire Cardie. 2013. Joint inference for fine-grained opinion extraction. In ACL. Bishan Yang and Claire Cardie. 2014. Joint modeling of opinion expression extraction and attribute classification. Trans. of the Association for Computational Linguistics. David Yarowsky and Grace Ngai. 2001. Inducing multilingual pos taggers and np bracketers via robust projection across aligned corpora. In NAACL. Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In EMNLP. Daniel Zeman and Philip Resnik. 2008. Cross-language parser adaptation between related languages. In IJCNLP. 418
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 419–429, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Learning to Adapt Credible Knowledge in Cross-lingual Sentiment Analysis Qiang Chen∗,†, Wenjie Li†,⋆, Yu Lei†, Xule Liu∗, Yanxiang He∗,‡ ∗School of Computer Science, Wuhan University, China †Department of Computing, The Hong Kong Polytechnic University, Hong Kong ⋆Hong Kong Polytechnic University Shenzhen Research Institute, China ‡The State Key Lab of Software Engineering, Wuhan University, China ∗{qchen, xuleliu, yxhe}@whu.edu.cn †{csqchen, cswjli, csylei}@comp.polyu.edu.hk Abstract Cross-lingual sentiment analysis is a task of identifying sentiment polarities of texts in a low-resource language by using sentiment knowledge in a resource-abundant language. While most existing approaches are driven by transfer learning, their performance does not reach to a promising level due to the transferred errors. In this paper, we propose to integrate into knowledge transfer a knowledge validation model, which aims to prevent the negative influence from the wrong knowledge by distinguishing highly credible knowledge. Experiment results demonstrate the necessity and effectiveness of the model. 1 Introduction With the wide range of business value, sentiment analysis has drawn increasing attention in the past years. The extensive research and development efforts produce a variety of reliable sentiment resources for English, one of the most popular language in the world. These available rich resources become the treasure of knowledge to help conduct or enhance sentiment analysis in the other languages, which is a task known as cross-lingual sentiment analysis (CLSA). In the literature of CSLA, the language with abundant reliable resources is called the source language (e.g., English), while the low-resource language is referred to as the target language (e.g., Chinese). However, in this paper, the situation is a low resource language scenario, where the source language is English, and the target language is Chinese. The main idea of existing CLSA researches is to first build up the connection between the source and target languages to overcome the language barrier, and then develop an appropriate knowledge transfer approach to leverage the annotated data from the source language to train a sentiment classification model in the target language, either supervised or semi-supervised. In particular, these approaches exploit and convert the knowledge learned from the source language to automatically generate and expand the pseudo-training data for the target language. The machine translation (MT) service is one of the most common ways used to build the language connection (Wan, 2008; Banea et al., 2008; Wan, 2009; Wei and Pal, 2010; Gui et al., 2014). Although it is claimed in Duh et al. (2011) that the MT service is ripe for CLSA, the imperfect MT quality hinders existing MTbased CLSA approaches from the further advance. In our preliminary study, we find that even the Google translator1 (i.e., one of the most widely used online MT service (Shankland 2013)) may unavoidably changes the sentiment polarity of the translated text, as illustrated below, with a percentage of around 10%. [Original English Text]: I am at home on bed rest and desperate for something good to read. 
[Sentiment Label: Negative] [Translated Chinese Text]: ·3[¹K>E Úý"ÀÜéÐw" {Meaning: I am in bed to rest at home and feel that desperate things are also good to read.}[Sentiment Label: Positive] The noisy data generated by MT errors for sure will weaken the contribution of the transferred knowledge and even worse may create conflicting knowledge. While it is a critical step in CLSA to localize the sentiment knowledge learned from the source language in the target language, to the best of our knowledge, hardly any previous research has focused on knowledge validation to filter out the noisy knowledge having sentiment changes caused by wrong translations during knowledge transfer. 1http://translate.google.com 419 To reduce the noisy sentiment knowledge introduced into the target language, we are motivated to validate the knowledge transferred from the source language by checking its linguistic distributions and sentiment polarity consistency with the known knowledge in the target language. Different from previous co-training based approaches where two language views recommend knowledge to each other in the same manner, we consider the source language as the “supervisor” and the target language as the “learner”. The “supervisor” boosts itself with its own accumulated labeled data (called knowledge) and meanwhile recommends its confident knowledge to the “learner”. The “learner” tries to select trustworthy knowledge based on the recommendation to update and expand its training data. Adding a process to efficiently filter out noisy knowledge and retain the self-adaptive and interested new knowledge makes the subsequent boosting process more credible. This is why our approach can outperform state-ofthe-art CLSA approaches. The rest of this paper is organized as follows. Section 2 summarizes the related work. Section 3 explains the proposed model. Section 4 presents experimental results. Finally, Section 5 concludes the paper and suggests future work. 2 Related Work 2.1 Sentiment Analysis Sentiment has been analyzed in different language granularity, e.g., entity, aspect, sentence and document. This paper focuses on sentiment analysis of online product reviews in the document level. Existing approaches are generally categorized into lexicon-based and machine learning based approaches (Liu, 2012). Lexicon-based approaches highly depend on sentiment lexicons. Turney (2002) derives the overall phrase and document sentiment scores by averaging the sentiment scores provided in a lexicon over the words included. Similar idea is adopted in (Hiroshi et al., 2004; Kennedy and Inkpen, 2006). Machine learning based approaches, on the other hand, apply classification models. The task-specific features are designed to train sentiment polarity classifiers. Pang et al. (2002) compare the performance of NB, SVM and ME on movie reviews. SVM is found more effective. Gamon (2004) shows that SVM with deep linguistic features can further improve the performance. A variety of other machine learning approaches are also proposed to sentiment classification (Mullen and Collier, 2004; Read, 2005; Hassan and Radev, 2010; Socher et al., 2013). Cross-domain sentiment classification (CDSC) shares certain common characteristics with crosslingual sentiment classification (CLSC) (Tan et al., 2007; Li et al., 2009; Pan and Yang, 2010; He et al., 2011a; Glorot et al., 2011). Notice that the gap between source domain and target domain is the main difference between CDSC and CLSC. 
CLSC copes with two different datasets in two different languages. This difference makes CLSC a new challenge, drawing specific attention to researcher recently. 2.2 Cross-lingual Sentiment Analysis There are two alternative solutions to cross-lingual sentiment analysis. One is ensemble learning that combines multiple classifiers. The other is transfer learning that develops strategies to adapt the knowledge from one language to the other. Wan (2008) is among the pioneers to develop the ensemble learning solutions, where multiple classifiers learned from different training datasets including those in original languages and translated languages are combined by voting. Most researches, on the other hand, explore transfer learning and focus on knowledge adaptation. For example, Wan (2009) applies a supervised cotraining framework to iteratively adapt knowledge learned from the two languages by transferring translated texts to each other. Other similar work includes (Wei and Pal, 2010) and (He, 2011b). All these approaches rely on MT to build language connection. Meanwhile, the unlabeled parallel data is also employed to fill the gap between two languages. To solve the feature coverage problem with the EM algorithm, Meng et al. (2012) leverage the unlabeled parallel data to learn unseen sentiment words. Similarly, Popat et al. (2013) use the unlabeled parallel data to cluster features in order to reduce the data sparsity problem. Meng et al. (2012) and Popat et al. (2013) also use the unlabeled parallel data to reduce the negative influence of the noisy and incorrect sentiment labels introduced by machine translation and knowledge transfer. However, the parallel data is also a scarce resource. 420 Some existing transfer learning based CLSA methods have attempted to address the noisy knowledge problem caused by wrong labels by checking label consistency. For example, to filter out the unconfident labels in Chinese, the supervised learning method proposed by (Xu et al., 2011) runs boosting in Chinese by checking consistency between the labels manually annotated in English and predicted by Chinese classifiers on translated Chinese. The work in (Gui et al., 2014) follows the same line although it considers knowledge transferring between two languages. On the contrary, the main focus of our work is to filter out the noisy knowledge having sentiment changes by wrong translations. Actually, both label consistency checking and linguistic distribution checking are important. Any one alone cannot work well. In fact, both of them are considered as the knowledge validation in our work, though the later is our focus. 3 Credible Boosting Model In this paper, we propose a knowledge validation approach to improve the effectiveness of knowledge transfer without directly using extra parallel data. Our target is to filter out the noisy sentiment labels introduced by MT and the incorrect sentiment labels generated by imperfect classifier in the source language. Here, the knowledge is referred to as a collection of distributed document presentations with sentiment labels that have been verified to be robust in sentiment classification (Le and Mikolov, 2014). A novel credible boosting model, namely CredBoost is proposed to apply transfer-supervised learning with an added selfvalidation mechanism to guarantee the knowledge transferred highly credible and self-adaptive. 
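Schematically, the supervisor–learner interaction can be sketched as a short training loop. The Python skeleton below is only an orientation: the classifier training, confidence sampling and knowledge validation steps are passed in as callables, and all names are illustrative placeholders rather than the authors' implementation; the precise sampling, weighting and validation rules are given in Sections 3.2 and 3.3.

# Illustrative skeleton of the supervisor-learner boosting loop (names are placeholders).
def supervisor_learner_loop(labeled_en, unlabeled_cn, translate,
                            train, sample_confident, validate, iterations=50):
    source_pool = list(labeled_en)                            # supervisor's labeled English data
    target_pool = [(translate(x), y) for x, y in labeled_en]  # translated pseudo-training data
    candidates = list(unlabeled_cn)                           # unlabeled Chinese reviews
    for _ in range(iterations):
        supervisor = train(source_pool)                       # the supervisor boosts itself
        recommended = sample_confident(supervisor, candidates)   # confident (review, label) pairs
        accepted = validate(recommended, target_pool)         # the learner keeps only credible knowledge
        target_pool.extend(accepted)
        accepted_texts = {review for review, _ in accepted}
        candidates = [c for c in candidates if c not in accepted_texts]
    return train(target_pool)                                 # final Chinese classifier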
3.1 Problem Description In a standard cross-lingual sentiment analysis setting, the training data includes labeled English reviews LEN = {(xlen i , yi)}M i=1 and unlabeled Chinese reviews UCN = {xucn j }N j=1, where xk i (k = len or ucn) represents review i and yi ∈ {−1, 1} is the sentiment label of review xl i. The test data is Chinese reviews TCN = {xtcn s }S s=1. We now introduce the unlabeled data into credBoost’s setting. LEN is divided into two disjoint parts LT EN and LB EN, where LT EN for basic training and LB EN for self-boosting. We translate LEN into Chinese to obtain extra labeled Chinese pseudo-reviews LTrCN = {(xlcnT r i , yi)}M i=1 and UCN into English to obtain extra unlabeled English pseudo-reviews UTrEN = {xlenT r j }N j=1. Thereby, we obtain a pair of pseudo-parallel data (UCN, UTrEN). The task is to use LEN and UCN to train a Chinese classifier to predict sentiment polarity for the test data TCN. It is a standard transfer learning problem. We consider two language views, i.e., source language view DS and target language view Dτ. DS boosts itself with the labeled English data and recommend translated knowledge to Dτ, while Dt selects self-adaptive ones to boost itself. 3.2 Framework of CredBoost The CredBoost model involves two synchronously boosting views for two languages respectively. During training, one view acts as a “supervisor” that recommends and passes the knowledge to the other view. The same knowledge is also added into its own view for boosting by automatically updating the weights of the labeled data. The other view acts as a “learner” that receives the recommended knowledge and selects the bestsuited new knowledge to learn. As mentioned before, the knowledge transferred through MT is not reliable. The source language view may also make wrong predictions and thus transfer the wrong knowledge to the target language even the translations are correct. Whether or not the “learner” can benefit from its “supervisor” and how much it benefits highly depends on the credibility and adaptiveness of the recommended knowledge accepted by the “learner”. Knowledge validation is necessary to ensure the quality of learning. The objective of knowledge validation is to identify the new and acquired knowledge from recommendations. Both language views are iteratively trained until learning converges or reaches the iteration upper bound. In the source language view, at iteration (t), the CredBoost model first uses LT(t) EN to train a basic classifier C(t) EN and then uses C(t) EN to predict LB(t) EN and U(t) TrEN. Top m and top n instances are sampled from LB(t) EN and U(t) TrEN respectively, by Formula (1) : O(t) EN = {(xLB i′ , ˆyLB i′ )}men i′=1 TR(t) EN = {(xUT r i , ˆyUT r i )}nen i=1 (1) where O(t) EN denotes the candidates to be added 421 into the training data, and TR(t) EN the knowledge to be recommended to the target language view. We use the source knowledge validation function VS(O(t) EN) to identify the acquired knowledge K(t) ′Ac learned in the previous learning process and the new knowledge K(t) ′Nw fresh to the current knowledge system from O(t) EN. The importance of each training instance is updated according to the performance of prediction by Formula (2) : ω ′Ac i′ =    eϵ(t) · q ν(t) i′ · c(t) i′ if ˆy ′Ac i′ ̸= y ′Ac i′ q ν(t) i′ · c(t) i′ otherwise; ω ′Nw j′ = ( eϵ(t) · log (1 + √e · c(t) j′ ) if ˆy ′Ac j′ ̸= y ′Ac j′ log (1 + √e · c(t) j′ ) otherwise. 
(2) where c(t) j′ is the confidence of an instance given by C(t) EN, thus log (1 + √e · c(t) j′ ) > 1 is to enhance the weight of new knowledge because of the higher significance contributing to the later learning. ν(t) i′ (< 1) is the adaptiveness score given by the source knowledge validation function VS(O(t) EN). ϵ(t)(> 1) is the error rate of C(t) EN, thus eϵ(t) > 1 is to reward the wrongly predicted data in the next iteration. ˆy ′Ac i′ is the label given by C(t) EN and y ′Ac i′ is the manually annotated label. For the incorrectly predicted instance, the weight is boosted inversely to the performance of the current classifier. The instance identified as the new knowledge which contributes more to performance improvement is given a reward parameter to enhance its significant in the next training iteration. Data sets update by Formula (3). The training starts with iteration (1), the training data is initially set as LT(1) EN = LT EN. LT (t+1) EN = LT (t) EN ∪K(t) ′Ac ∪K(t) ′Nw LB(t+1) EN = LB(t) EN −(K(t) ′Ac ∪K(t) ′Nw) (3) In the target language view, at iteration (t), the CredBoost model receives the recommended knowledge TR(t) EN and projects it to O(t) CN from the unlabeled Chinese data U(t) CN with the pseudoparallel data (U(t) CN, U(t) TrEN). OCN (t) is validated by the target knowledge validation function Vτ(O(t) CN) to identify the acquired knowledge K(t) Ac and the new knowledge K(t) Nw. K(t) Ac and K(t) Nw are projected to K(t) ∗Ac and K(t) ∗Nw from the unlabeled English pseudo-data U(t) TrEN. The weight of an instance is updated by Formula (4), and the parameter setting is similar to that in the source language view. The confidence c(t) i is directly transferred from Ds. We reward the validated knowledge to raise their significance in the training data considering they are originally Chinese. ωAc i = q c(t) i · log(1 + √e · v(t) i ) ωNw j = elog (1+√e·c(t) j ) = 1 + √e · c(t) j (4) We update the data setting by Formula (5). The training data is initially set as UT(1) CN = UT CN. The CredBoost model is illustrated in Algorithm 1. L(t+1) T rCN = L(t) T rCN ∪K(t) Ac ∪K(t) Nw U (t+1) CN = U (t) CN −(K(t) Ac ∪K(t) Nw) U (t+1) T rEN = U (t) T rEN −(K(t) ∗Ac ∪K(t) ∗Nw) (5) Algorithm 1 CredBoost Model Input: English labeled data LT EN and LB EN, translated English unlabeled data UT rEN, translated Chinese data LT rCN and unlabeled Chinese data UCN; Initialize: Weights W (1) EN = {1}M for LT EN and W (1) T rCN = {1}M for LT rCN; For t = 1, · · · , T: 1. Use LT (t) EN to learn English classifier CEN(t); 2. Use C(t) EN to predict LB(t) EN and U (t) T rEN sample top m and top n instances from LB(t) EN and U (t) T rEN, O(t) EN and TR(t) EN; 3. Validate O(t) EN by knowledge validation function VS(O(t) EN) to identify acquired knowledge K(t) ′Ac and new knowledge K(t) ′Nw, generate the weights for them by Formula (2), then recommend TR(t) EN to Dτ; 4. Project TR(t) EN to O(t) CN with pseudo-parallel data (U (t) CN, U (t) T rEN), and use knowledge validation function Vτ(O(t) CN) to identify acquired knowledge K(t) Ac and new knowledge K(t) Nw, then generate weights for them by Formula (4); 5. Update DS by Formula (2) and Dτ by Formula (5); End For. Output: Chinese classifier C(T ) CN. 3.3 Knowledge Validation Knowledge is familiarity, awareness or understanding of someone or something, such as facts, information or skills, which is acquired through experience or education by perceiving, discovering or learning2. It can be implicit or explicit. 
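To make the instance-weight updates concrete, the snippet below writes out one reading of Formulas (2) and (4). The exact functional forms are an interpretation of the equations above rather than the authors' code, and the variable names are chosen only for readability.

import math

SQRT_E = math.sqrt(math.e)

# Interpretation of Formula (2): weights of candidates added in the source view.
def source_weight_acquired(confidence, adaptiveness, error_rate, mispredicted):
    w = math.sqrt(adaptiveness * confidence)
    return math.exp(error_rate) * w if mispredicted else w

def source_weight_new(confidence, error_rate, mispredicted):
    w = math.log(1.0 + SQRT_E * confidence)
    return math.exp(error_rate) * w if mispredicted else w

# Interpretation of Formula (4): weights of knowledge accepted in the target view,
# where the confidence is transferred from the source classifier and validated
# (originally Chinese) knowledge is rewarded.
def target_weight_acquired(confidence, adaptiveness):
    return math.sqrt(confidence) * math.log(1.0 + SQRT_E * adaptiveness)

def target_weight_new(confidence):
    return 1.0 + SQRT_E * confidence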
In machine learning, natural language knowledge is a continuously improving hypothesis that consists of both semantic and significant domain 2Definition from Oxford Dictionary of English, available at: http://oxforddictionaries.com/view/ entry/m_en_us126. 422 characters. While language is the expression of semantic, semantic is the carrier of sentiment. Using another word, two texts with more smaller semantic distance have higher probability to share the same sentiment polarity. Choi and Cardie (2008) assert that the sentiment polarity of natural language can be better inferred by compositional semantics. They also suggest that incorporating compositional semantics into learning can improve the performance of sentiment classifiers. Saif et al. (2012) also demonstrate that the addition of extra semantic features can further improve performance. In order to filter out noisy and incorrect sentiment labels, we propose a knowledge validation approach to reduce these noisy data that hinder the improvement of learning performance. Knowledge validation is a way to identify the acquired knowledge implied in current knowledge system and also the new knowledge fresh to current knowledge system. The knowledge can be represented in the semantic space. (Le and Mikolov, 2014) project documents into a low-dimension semantic space with a deep learning approach, known as document-to-vector (Doc2Vec3). Considering that Dov2Vec has been verified to be efficient in many NLP tasks including sentiment analysis, we follow previous research to represent knowledge embedded in product reviews with the vectors generated by Doc2Vec. Suppose distributed representations (i.e., lowdimensional vectors) of the all reviews including {LT EN, LB EN, UTrEN} and {LTrCN, UCN} are {V(LT EN), V(LB EN), V(UTrEN)} and {V(LTrCN), V(UCN)} respectively. At iteration (t), V(LT(t) EN ) is the current knowledge system of the English view and V(L(t) TrCN) is that of the Chinese. The knowledge validation runs separately in the source and target views. In the target language view, at iteration (t), suppose the prediction confidence of the candidate (xU i , ˆyU i ) ∈ O(t) CN is c(t) i . We define the adaptiveness score as the average distance of top ζ+ semantic distances between the instance xLB i and the positive cluster of L(t) TrCN, denoted as L(t)+ TrCN, and top ζ(t) −= ζ+· L(t) + L(t) − semantic distances between xU i and the negative cluster, denoted as 3Doc2Vec is one of the models implemented in the free python library Gensim which can be freely downloaded at: https://pypi.python.org/pypi/gensim. L(t)− TrCN, where L(t) + and L(t) −are the numbers of the elements in L(t)+ TrCN and L(t)− TrCN respectively. The validation parameters are defined by Formula (6), ωr is the weight of training instance V(r), ν(t) i is the adaptiveness score, and Vlabel ∗ ∈{1, −1} is the validated label which denotes the knowledge belonging to the positive cluster L(t)+ TrCN or the negative cluster L(t)− TrCN. The validation process is illustrated in Algorithm 2, where the acquired knowledge is k(t) Ac, and the new knowledge is k(t) Nw. D(V(xLB i ), V(r)) = V(xLB i ) T · V(r) ∥V(xLB i ) ∥· ∥V(r) ∥ ⇒        ν(t)+ i = 1 ζ+ P r∈L(t)+ EN ωr D(V(xLB i ), V(r)) ν(t)− i = 1 ζ(t) − P r′∈L(t)− EN ωr′ D(V(xLB i ), V(r′)) ⇒ ∆(ν(t) i ) = ν(t)+ i −ν(t)− i ⇒ δ(t) i = 1 e1+∆(ν(t) i ) ⇒ Vlabel ∗ = ( 1 if δ(t) i > 0.5, −1 if δ(t) i ≤0.5. ⇒ν(t) i = ( ν(t)+ i if Vlabel ∗ = 1, ν(t)− i if Vlabel ∗ = −1. 
(6) where D(V(xLB i ), V(r)) is the Cosine distance between the distributed representations of the two reviews. ν(t)+ i and ν(t)− i are the weighted averages of the semantic distances. δ(t) i is the Sigmoid function which computes the probability that the data is distributed in the positive cluster L(t)+ TrCN. In the source language view, at iteration (t), let’s suppose the prediction confidence of candidate (xLB i′ , ˆyLB i′ ) ∈O(t) EN to be c(t) i′ . The definitions of validation parameters are similar to those in the target language view. The validation process is illustrated in Algorithm 3. The validation is looser, because the training data and candidates are both in English. This differs from it in the target view. 4 Experiments 4.1 Experimental Setup We evaluate the proposed CredBoost model on an open cross-lingual sentiment analysis task in NLP&CC 20134. The data set provided is a 4NLP&CC is an annual conference of Chinese information technology professional committee organized by Chinese computer Federation (CCF). It mainly focuses on the study and application novelty of natural language processing and Chinese computation. CLSA task is the task 3 of NLP&CC 2013. For more details and open 423 Algorithm 2 Knowledge Validation Vτ(Dτ) Input: Labeled Chinese training data L(t) T rCN, weights of labeled data W (t) CN and semantics vectors of all English data for iteration (t): {V(L(t) T rCN), V(U (t) CN)}; Initialize: K(1) ′Ac = φ, K(1) ′Nw = φ; For xU i in O(t) CN: 1. Use L(t) T rCN to train a classifier C(t) CN, then use C(t) CN predict xU i , giving label yCN i ; 2. Get validated label Vlabel ∗ , positive and negative average distances ν(t)+ i , ν(t)− i of xU i by fomula (6); 3. If ν(t)+ i < ψ and ν(t)− i < ψ: If ˆyLB i = Vlabel ∗ : Then K(t) Nw ←K(t) Nw + xU i ; Else: If ˆyLB i = Vlabel ∗ = yCN i : Then K(t) Ac ←K(t) Ac + xU i ; End For. Output: K(t) Nw, K(t) Ac. Algorithm 3 Knowledge Validation VS(DS) Input: Weights of labeled data W (1) EN and semantics vectors of all English data for iteration (t): {V(LT (t) EN ), V(LB(t) EN ), V(U (t) T rEN)}; Initialize: K(1) ′Ac = φ, K(1) ′Nw = φ; For xLB i′ in O(t) EN: 1. Get validated label Vlabel ′ , positive and negative average distances ν(t)+ i′ , ν(t)− i′ of xLB i′ by fomula (6); 2. If ν(t)+ i′ < ψ and ν(t)− i′ < ψ: If ˆyLB i′ = Vlabel ′ : Then K(t) ′Nw ←K(t) ′Nw + xLB i′ ; Else: If ˆyLB i′ = Vlabel ′ : Then K(t) ′Ac ←K(t) ′Ac + xLB i′ ; End For. Output: K(t) ′Nw, K(t) ′Ac. collection of bilingual Amazon product reviews in Books, DVD and Music domains. It contains 4,000 labeled English reviews, 4,000 Chinese test reviews, and 17,814, 47,071, 29,677 unlabeled Chinese reviews in three different domains. We randomly select 2,000 unlabeled Chinese reviews in each domain to train classifiers. Besides, the pseudo-data sets described in CredBoost model are translated with Google translator. The data set is summarized in Table 1. To better illustrate the significance of knowledge validation during knowledge transfer, we compare the proposed method with the following baseline methods: Lexicon-based (LB): The standard English MPQA sentiment lexicons are translated into resource, you can available at: http://tcci.ccf.org. cn/conference/2013/index.html. Domain English Chinese L U L U Books Train 4,000 2,000 Test 4,000 DVD Train 4,000 2,000 Test 4,000 Music Train 4,000 2,000 Test 4,000 Table 1: Experimental data sets. All data sets are balanced, L represents labeled data and U represents unlabeled data. 
Chinese and then utilized together with a small number of Chinese turning words, negations and intensifiers to predict the sentiment polarities of the Chinese test reviews. Basic SVM (BSVM-CN): The labeled English reviews are translated into Chinese, which are then used as the pseudo-training data to train a Chinese SVM classifier. Primarily boost transfer learning (BTL-1): The labeled English reviews are used to train the English classifier, which is applied to label the English translations of the unlabeled Chinese reviews. These labeled Chinese reviews obtained via MT together with the Chinese translations of the labeled English reviews are then used as the pseudo-training data to train a Chinese sentiment classifier. Best result in NLP&CC 2013 (BR2013): This is the best result reported in NLP&CC 2013. Unfortunately, the specification of the method is not available. Self-boost (SB-CN) in Chinese: The labeled English reviews are translated into Chinese, which are used as the pseudo-training data to train a basic Chinese classifier. This classifier is iteratively refined by choosing the most confidently predicted English reviews to add into the Chinese training data until a predefined iteration number reaches. It can be also considered as a self-adaptive boosting approach. Iteratively boost transfer learning (BTL-2): This is an enhanced transfer learning method sharing the same learning framework with CredBoost but it ignores knowledge validation. It iteratively transfers the knowledge from English to Chinese. The learning in both languages iteratively boosts themselves separately. The transfer size is 16, comparable to that in CredBoost. Basic co-training (CoTr): The co-training method proposed in (Wan, 2009) is implemented. It is bidirectional transfer learning. In each 424 iteration, 10 positive and 10 negative reviews are transferred from one language to the other. Doc2vec feature CredBoost (dCredB): This method is similar to CredBoost except that document-to-vector is used to generate features when training basic classifiers. The vectors are obtained from both original and translated reviews. The dimension of doc2vec is 300, while the other parameters are set as default. The baseline methods described above are categorized into three classes: the first four which are preliminary methods, the middle three which are several state-of-the-art models being comparable to our proposed model, and the last one which is a comparison to suggest that the knowledge representation is not the answer to the performance improvement. For all the methods excluding LB and BR2013, we use support vector machines (SVMs) as basic classifiers. We use the Liblinear package (Fan et al., 2008) with the linear kernel5. All methods use Unigram+Bigram features to train the basic classifiers, except for dCredB. 4.2 Experimental Result In this work, there are two main parameters that may significantly influence the performance of our proposed model. They are the new knowledge validation boundary ψ and the validation scale ζ+ in the training data. We set the values of parameters with the grid search strategy. We first fix initial ζ+ = 14 to search the best new knowledge validation boundary ψ from an empirical value set {0.30, 0.35, 0.40, 0.45, 0.50}. We then fix the best ψ = 0.40 to check the suitable validation scale ζ+ from the initial value set {6, 8, 9, 10, 11, 12, 14, 16} in which values are comparable with the knowledge transfer scale of CoTr in the training data. 
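The coordinate-wise grid search over ψ and ζ+ described above amounts to the following sketch, where evaluate_model is a hypothetical callable that trains CredBoost with a given (ψ, ζ+) pair and returns a validation accuracy.

# Coordinate-wise grid search over the two parameters, as described in the text.
# evaluate_model is a hypothetical callable: (psi, zeta_plus) -> accuracy.
def tune_parameters(evaluate_model,
                    psi_grid=(0.30, 0.35, 0.40, 0.45, 0.50),
                    zeta_grid=(6, 8, 9, 10, 11, 12, 14, 16),
                    initial_zeta=14):
    best_psi = max(psi_grid, key=lambda psi: evaluate_model(psi, initial_zeta))
    best_zeta = max(zeta_grid, key=lambda zeta: evaluate_model(best_psi, zeta))
    return best_psi, best_zeta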
Besides, the recommendation size m for English is set to 20 and the recommendation size n for Chinese is set to 40. The final settings are listed in Table 2. The performance is evaluated in terms of accuracy (Ac) defined by Formula (7). Ac(f) = pf P f , Avg Ac = 1 3 · X f′ ∈F Ac(f ′) (7) where pf is the number of correct predictions and P f is the total number of the test data; F ∈ {Books, DV D, Music} is the domain set. 5The parameter setting used in this paper is ‘-s 7’. Domain ψ ζ+ m n Books 0.45 12 20 40 DVD 0.40 12 20 40 Music 0.40 9 20 40 Table 2: Parameter settings of three domains in this paper. Approaches Domain Avg Ac Books DVD Music LB 0.7770 0.7832 0.7595 0.7709 BSVM-CN 0.7940 0.7995 0.7778 0.7904 BTL-1 0.8010 0.8058 0.7605 0.7891 BR2013 0.7850 0.7773 0.7513 0.7712 SB-CN 0.8400 0.8428 0.8012 0.8280 BTL-2 0.8105 0.8265 0.7980 0.8117 CoTr 0.8025 0.8508 0.7812 0.8115 dCredB 0.6485 0.6753 0.6700 0.6646 CredBoost 0.8465 0.8518 0.8093 0.8359 Table 3: Macro performance of all approaches in three domains. All values are accuracies and Avg-Ac represents the average accuracy in three domains. The performances are reported in Tables 3 and 4. As shown, CredBoost outperforms all the other comparison methods. The first four baselines have poor performances compared to others. This suggests that the CLSA problem cannot be well solved by directly learning from the labeled translated data without any knowledge adaption or knowledge validation. SB-CN, BTL-2 and CoTr employ iterative boosting to adapt knowledge from the source English to the target Chinese without validating the transferred knowledge. They inevitably mis-recommend the massive noisy data into Chinese. CredBoost, in contrast, introduces knowledge validation into transfer learning with iterative boosting. It better adapts knowledge from English to Chinese and thus ensures the credibility of the accepted knowledge. Its best result justifies our assumption. Specifically, SB-CN leverages both the Chinese training data translated from the labeled English data and the unlabeled Chinese data used for boosting. The boosting in Chinese iteratively selects the trustworthy data with the labels assigned by the Chinese classifier. Our proposed method, however, exploits two different languages simultaneously with an additional boosting step, i.e., it transfers knowledge from English to Chinese during boosting. 
We then use knowledge validation model to validate the unlabeled Chinese data whose labels are assigned by the English 425 Model (Books) Positive Negative Ac P R F1 P R F1 LB 0.7368 0.8400 0.7850 0.8140 0.7000 0.7527 0.7700 BSVM-CN 0.8249 0.7465 0.7837 0.7685 0.8415 0.8033 0.7940 BTL-1 0.8537 0.7265 0.7850 0.7620 0.8755 0.8148 0.8010 BR2013 0.7850 SB-CN 0.8716 0.7975 0.8329 0.8134 0.8825 0.8465 0.8400 BTL-2 0.7105 0.8881 0.7894 0.9105 0.7588 0.8278 0.8105 CoTr 0.8339 0.7555 0.7928 0.7765 0.8495 0.8114 0.8025 dCredB 0.5310 0.6941 0.6017 0.7660 0.6202 0.6854 0.6485 CredBoost 0.8225 0.8640 0.8427 0.8705 0.8306 0.8501 0.8465 Model (DVD) Positive Negative Ac P R F1 P R F1 LB 0.7648 0.8180 0.7905 0.8044 0.7485 0.7754 0.7832 BSVM-CN 0.7745 0.8450 0.8082 0.8295 0.7540 0.7900 0.7995 BTL-1 0.8282 0.7715 0.7988 0.7861 0.8400 0.8122 0.8058 BR2013 0.7773 SB-CN 0.8853 0.7875 0.8335 0.8086 0.8980 0.8510 0.8428 BTL-2 0.8525 0.8104 0.8309 0.8005 0.8444 0.8219 0.8265 CoTr 0.8374 0.8705 0.8536 0.8652 0.8310 0.8478 0.8508 dCredB 0.6070 0.7030 0.6515 0.7435 0.6542 0.6960 0.6753 CredBoost 0.8440 0.8572 0.8508 0.8595 0.8465 0.8530 0.8518 Model (Music) Positive Negative Ac P R F1 P R F1 LB 0.7387 0.8030 0.7695 0.7842 0.7160 0.7485 0.7595 BSVM-CN 0.8492 0.6755 0.7525 0.7306 0.8800 0.7984 0.7778 BTL-1 0.8437 0.6395 0.7275 0.7097 0.8815 0.7863 0.7605 BR2013 0.7513 SB-CN 0.8787 0.6990 0.7786 0.7501 0.9035 0.8197 0.8012 BTL-2 0.7285 0.8461 0.7829 0.8675 0.7616 0.8111 0.7980 CoTr 0.8536 0.6790 0.7564 0.7335 0.8835 0.8015 0.7812 dCredB 0.5860 0.7043 0.6397 0.7540 0.6455 0.6955 0.6700 CredBoost 0.7258 0.8708 0.7917 0.8928 0.7653 0.8241 0.8093 Table 4: Micro performance of all approaches in three domains. P: Precision, R: Recall, F1: micro-F measure, Ac: Accuracy, and - represents unknown. The model in BR2013 is unknown, thus its micro performance is unavailable. classifier. It is reasonable that a Chinese classifier performs better on Chinese text than an English classifier performs on the translated English text due to the different language distributions and MT errors. However, as shown in Tables 3 and 4, the better performance of our proposed method compared with that of the self-boosting method further suggests the effectiveness of our proposed knowledge validation model. Figure 1 illustrates the continuous changes of performances vs. the corresponding growth sizes of the training data sets for SB-CN, BTL-2, CoTr, and CredBoost. According to our common sense, noisy data have negative influence on performance improvement. Compared to the other three methods, CredBoost accepts less number of training instances during learning while it achieves more improvement. This verifies the ability of CredBoost that can filter out the noisy data recommended by the English sentiment classifier. In Figure 1(a), the curves of BTL-2 and CoTr suggest that directly transferring the knowledge recommended from English imports many noisy data into Chinese. It is also obvious that the performance curve of CredBoost implies a stable improvement trend while the other three decrease after certain iterations because of the accumulated negative influence from the noisy data. Figure 1(b) shows CredBoost accepts decreased training instances after certain iterations because the number of “high-quality” instances decrease when learning proceeds. This finding suggests that knowledge validation would rather abandon “lesscredible” knowledge with higher probability than easily accept it. 
Knowledge validation in the proposed model guarantees highly-credible learning when transferring knowledge from English to Chinese. The results also show that CredBoost has great potential to achieve better performance approaching to supervised approaches if more unlabeled Chinese data are available. Another interesting finding is also observed. 426 (a) Performances comparison in three domains (b) Growth sizes comparison in three domains 0 20 40 60 80 100 120 0.65 0.7 0.75 0.8 0.85 Performance in Books domain. Iteration Number Accuracy selfBoost CoTr BTL-2 CredBoost 0 20 40 60 80 100 120 0.76 0.77 0.78 0.79 0.8 0.81 0.82 0.83 0.84 0.85 0.86 Performance in DVD domain. Iteration Number Accuracy selfBoost CoTr BTL-2 CredBoost 0 20 40 60 80 100 120 0.6 0.65 0.7 0.75 0.8 Performance in Music domain. Iteration Number Accuracy selfBoost CoTr BTL-2 CredBoost 0 20 40 60 80 100 120 0 5 10 15 20 25 30 Growth Sizes in Books domain. Iteration Number NO. of Instances selfBoost CoTr BTL-2 CredBoost 0 20 40 60 80 100 120 0 5 10 15 20 25 30 Growth Sizes in DVD domain. Iteration Number NO. of Instances selfBoost CoTr BTL-2 CredBoost 0 20 40 60 80 100 120 0 5 10 15 20 25 Growth Sizes in Music domain. Iteration Number NO. of Instances selfBoost CoTr BTL-2 CredBoost Figure 1: Performances vs. Growth Sizes for SB-CN, CoTr, BTL-2, and CredBoost in three domains. The similar performance curves of CoTr is also reported in (Gui et al., 2014). Although document-to-vector represents content semantic well, it cannot determine the sentiment polarity of text well, even when the documentto-vectors that are used to train basic classifiers are learned on the mixture of the translated and original reviews. The superior performance of CredBoost to dCredB suggests that the semantic representation is effective to identify highlycredible acquired knowledge and new knowledge but it alone may not be sufficient enough to model the sentiment information. We also conduct some other experiments to study the sensitivity of the new knowledge validation boundary ψ and the validation scale ζ+ in the training data. The experimental results show that the performances with different parameter settings fluctuate around the best result reported in Tables 3 and 4 in a small range. Our model is basically quite stable. 5 Conclusion In this paper, we propose a semi-supervised learning model, called CredBoost, to address crosslingual (English vs Chinese) sentiment analysis without direct labeled Chinese data nor direct parallel data. We propose to introduce knowledge validation during transfer learning to reduce the noisy data caused by machine translation errors or inevitable mistakes made by the source language sentiment classifier. The experimental result demonstrates the effectiveness of the proposed model. In the future, we will explore more suitable knowledge representations and knowledge validation in the CredBoost framework. Acknowledgements We thank all the anonymous reviewers for their detailed and insightful comments on this paper. The work described in this paper was supported by the Research Grants Council of Hong Kong project (PolyU 5202/12E and PolyU 152094/14E) and the grants from the National Natural Science Foundation of China (61272291, 61472290, 61472291 and 61303115). References Carmen Banea and Rada Mihalcea, Janyce Wiebe, Samer Hassan. 2008. Multilingual Subjectivity Analysis Using Machine Translation. In Proceedings of the 2008 Conference on Empirical Methods in Natual Language Processing, pages 127-135, Honolulu, October. 
Carmen Banea, Yoonjung Choi, Lingjia Deng, Samer Hassan, Michael Mohler, Bishan Yang, Claire 427 Cardie, Rada Mihalcea, Janyce Wiebe. 2013. CPNCORE: A Text Semantic Similarity System Infused with Opinion Knowledge. In Proceedings of the Main Conference and the SHared Task in *SEM 2013, pages 221-228, Atlanta, Georgia, June 13-14, 2013. Yejin Choi and Claire Cardie. 2008. Learning with Compositional Semantics as Structural Inference for Subsentential Sentiment Analysis. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 792-801, Honolulu, October 2008. Kevin Duh and Akinori Fujino and Masaaki Nagata. 2011. Is Machine Translation Ripe for Crosslingual Sentiment Classification? In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: shortpapers, pages 429433, Portland, Oregon, June 19-24, 2011. Rong-En Fan, Kai-Wei Chang, Cho-Jui Ksieh, XiangRui Wang, Chih-Jen Lin. 2008. LIBLINEAR: A Library for Large Linear Classification. In Journal of Machine Learning Research, 9 (2008) 1871-1874. Micheal Gamon. 2004. Sentiment Classification on Customer Feedback Data: Noisy Data, Large Feature Vectors and the Role of Linguistic Analysis. In Proceedings of the 20th International Conference on Computational Linguistics, pages 841-847, CH. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain Adaptation for Large-scale Sentiment Classification: A Deep Learning Approach. In Proceedings of the 28th International Conference on Machine Learning, pages 513-520, Bellevue, Washington, USA. Lin Gui, Ruifeng Xu, Qin Lu, Jun Xu, Jian Xu, Bin Liu, Xiaolong Wang. 2014. Cross-lingual Opinion Analysis via Negative Transfer Detection. In Proceedings of the 52th Annual Meeting of the Association for Computational Linguistics (short paper), pages 860-865, Baltimore, Maryland, USA, June 23-25 2014. Ahmed Hassan and Dragomir Radev. 2010. Identifying Text Polarity Using Random Walks. In Proceedings of the 48th Annual Meeting on Association for Computational Linguistics, pages 395-403, Uppsala, Sweden, 11-16 July 2010. Yulan He, Chenghua Lin, Harith Alani. 2011a. Automatically Extracting Polarity-bearing Topics for Cross Domain Sentiment Classification. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Huamn Language Technologies, pages 123-131, Portland, Oregon, USA. Yulan He. 2011b. Latent Sentiment Model for WeaklySupervised Cross-Lingual Sentiment Classification. In Proceedings of the 33th European Conference on Information Retrieval(ECIR 2011), 18-21 Apr 2011, Dublin, Ireland. KANAYAMA Hiroshi, NASUKAWAA Tetsuya, WATANABE Hideo. 2004. Deeper Sentiment Analysis Using Machine Translation Technology. In Proceedings of the 20th International Conference on Computational Linguistics, pages 494-500. Alistair Kennedy and Diana Inkpen. 2006. Sentiment Classification of Movie and Product Reviews Using Contextual Valence Shifters. Computational Intelligence,22(2):110-125. Quoc Le, Tomas Mikolov. 2014. Distributed Representations of Sentences and Documents. In Proceedings of the 31th International Conference on Machine Learning, Beijing, China, 2014. JMLR: W&CP volume 32. Tao Li, Vikas Sindhwani, Chris Ding, and Yi Zhang. 2009. Knowledge Transformation for CrossDomain Sentiment Classification. In Proceedings of the 32th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 716-717, Boston, MA, USA. Bing Liu. May 2012. Sentiment Analysis and Opinion Mining. 
Morgan & Claypool Publisher. Xinfan Meng, Furu Wei, Xiaohua Liu, Ming Zhou, Ge Xu, Houfeng Wang. 2012. Cross-Lingual Mixture Model for Sentiment Classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 572-581, Jeju, Republic of Korea, 8-14 July 2012. Tony Mullen and Nigel Collier. 2004. Sentiment analysis using support vector machines with diverse inoformation sources. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 412-418, (July 2004) poster paper. Sinno Jialin Pan and Qiang Yang, Fellow, IEEE. 2010. A Survey on Transfer Learning. In Journal of IEEE Transactions on Knowledge and Data Engineering, Vol.22, NO.10, October 2010. Bo Pang and Lillian Lee, Shivakumar Vaithyanathan. 2002. Thumps Up? Sentiment Classification using Machine Learning Techniques. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing, pages 79-86, Philadelphia, July 2002. Kashyap Popat, Balamurali A R, Pushpak Bhattacharyya and Gholamreza Haffari. 2013. The Haves and the Have-Nots: Leverage Unlabeled Corpora for Sentiment Analysis. In Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics, pages 412-422, Sofia, Bulgaria, 4-9 August 2013. Jonathon Read. 2005. Using Emotions to reduce Dependency in Machine Learning Techniques for Sentiment Classification. In Proceedings of the 43th Annual Meeting on Association for Computational Linguistics Student Research Workshop, pages 4348. 428 Hassan Saif, Yulan He and Harith Alani. 2012. Semantic Sentiment Analysis of Twitter. In Proceedings of the 11th International Semantics Web Conference ISWC 2012, Boston, USA. Stephen Shankland. 2013. Google Translate now serves 200 millon people daily. In CNET. CBS Interactive Inc. May 18, 2013. Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Chiristopher D. Manning, Andrew Y. Ng and Christopher Potts. 2013. Recursive Deep Models for Semantics Computationality Over a Sentiment Treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing. Songbo Tan, Gaowei Wu, Huifeng Tang and Xueqi Cheng. 2007. A Novel Scheme for Domain-transfer Problem in the context of Sentiment Analysis. In CIKM 2007, November 6-8, 2007, Lisboa, Portugal. Peter D. Turney. 2002. Thumps Up or Thumps Down? Semantic Orientation Applied to Unsupervised Classification of Reviews. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 417-424, Philadelphia, July 2002. Xiaojun Wan. 2008. Using Bilingual Knowledge and Ensemble Technics for Unsupervised Chinese Sentiment Analysis. In Proceedings of the 2008 Conference on Empirical Methods in Natual Language Processing, pages 553-561, Honolulu, October 2008. Xiaojun Wan. 2009. Co-Training for Cross-Lingual Sentiment Classification. In Proceedings of the 47th Annual Meeting of the ACL and the 4th IJCNLP of the AFNLP, pages 235-243, Suntec, Singapore, 2-7 August 2009. Bin Wei and Christopher Pal. 2010. Cross Lingual Adaptation: An Experiment on Sentiment Classifications. In Proceedings of the 48 Annual Meeting of the Association for Computational Linguistics (short paper), pages 258-262, Uppsala, Sweden, 11-16 July 2010. Ruifeng Xu, Jun Xu and Xiaolong Wang. 2011. Instance Level Transfer Learning for Cross Lingual Opinion Analysis. 
In Proceedings of the 2nd Workshop on Computational Approaches to Subjectivity and Sentiment Analysis, ACL-HLT 2011, pages 182-188, 24 June, 2011, Portland, Oregon, USA.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 430–440, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Learning Bilingual Sentiment Word Embeddings for Cross-language Sentiment Classification Huiwei Zhou, Long Chen, Fulin Shi, and Degen Huang School of Computer Science and Technology Dalian University of Technology, Dalian, P.R. China {zhouhuiwei,huangdg}@dlut.edu.cn {chenlong.415,shi fl}@mail.dlut.edu.cn Abstract The sentiment classification performance relies on high-quality sentiment resources. However, these resources are imbalanced in different languages. Cross-language sentiment classification (CLSC) can leverage the rich resources in one language (source language) for sentiment classification in a resource-scarce language (target language). Bilingual embeddings could eliminate the semantic gap between two languages for CLSC, but ignore the sentiment information of text. This paper proposes an approach to learning bilingual sentiment word embeddings (BSWE) for English-Chinese CLSC. The proposed BSWE incorporate sentiment information of text into bilingual embeddings. Furthermore, we can learn high-quality BSWE by simply employing labeled corpora and their translations, without relying on largescale parallel corpora. Experiments on NLP&CC 2013 CLSC dataset show that our approach outperforms the state-of-theart systems. 1 Introduction Sentiment classification is a task of predicting sentiment polarity of text, which has attracted considerable interest in the NLP field. To date, a number of corpus-based approaches (Pang et al., 2002; Pang and Lee, 2004; Kennedy and Inkpen, 2006) have been developed for sentiment classification. The approaches heavily rely on quality and quantity of the labeled corpora, which are considered as the most valuable resources in sentiment classification task. However, such sentiment resources are imbalanced in different languages. To leverage resources in the source language to improve the sentiment classification performance in the target language, cross-language sentiment classification (CLSC) approaches have been investigated. The traditional CLSC approaches employ machine translation (MT) systems to translate corpora in the source language into the target language, and train the sentiment classifiers in the target language (Banea et al., 2008). Directly employing the translated resources for sentiment classification in the target language is simple and could get acceptable results. However, the gap between the source language and target language inevitably impacts the performance of sentiment classification. To improve the classification accuracy, multiview approaches have been proposed. In these approaches, the resources in the source language and their translations in the target language are both used to train sentiment classifiers in two independent views (Wan, 2009; Gui et al., 2013; Zhou et al., 2014a). The final results are determined by ensemble classifiers in these two views to overcome the weakness of monolingual classifiers. However, learning language-specific classifiers in each view fails to capture the common sentiment information of two languages during training process. With the revival of interest in deep learning (Hinton and Salakhutdinov, 2006), shared deep representations (or embeddings) (Bengio et al., 2013) are employed for CLSC (Chandar A P et al., 2013). 
Usually, paired sentences from parallel corpora are used to learn word embeddings across languages (Chandar A P et al., 2013; Chandar A P et al., 2014), eliminating the need of MT systems. The learned bilingual embeddings could easily project the training data and test data into a common space, where training and testing are performed. However, high-quality bilingual embeddings rely on the large-scale task-related parallel corpora, which are not always readily available. Meanwhile, though semantic similarities across languages are captured during bilingual embedding learning process, sentiment information of 430 text is ignored. That is, bilingual embeddings learned from unlabeled parallel corpora are not effective enough for CLSC because of a lack of explicit sentiment information. Tang and Wan (2014) first proposed a bilingual sentiment embedding model using the original training data and the corresponding translations through a linear mapping rather than deep learning technique. This paper proposes a denoising autoencoder based approach to learning bilingual sentiment word embeddings (BSWE) for CLSC, which incorporates sentiment polarities of text into the bilingual embeddings. The proposed approach learns BSWE with the original labeled documents and their translations instead of parallel corpora. The BSWE learning process consists of two phases: the unsupervised phase of semantic learning and the supervised phase of sentiment learning. In the unsupervised phase, sentiment words and their negation features are extracted from the source training data and their translations to represent paired documents. These features are used as inputs for a denoising autoencoder to learn the bilingual embeddings. In the supervised phase, sentiment polarity labels of documents are used to guide BSWE learning for incorporating sentiment information into the bilingual embeddings. The learned BSWE are applied to project English training data and Chinese test data into a common space. In this space, a linear support vector machine (SVM) is used to perform training and testing. The experiments are carried on NLP&CC 2013 CLSC dataset, including book, DVD and music categories. Experimental results show that our approach achieves 80.68% average accuracy, which outperforms the state-of-the-art systems on this dataset. Although the BSWE are only evaluated on English-Chinese CLSC here, it can be popularized to many other languages. The major contributions of this work can be summarized as follows: • We propose bilingual sentiment word embeddings (BSWE) for CLSC based on deep learning technique. Experimental results show that the proposed BSWE significantly outperform the bilingual embeddings by incorporating sentiment information. • Instead of large-scale parallel corpora, only the labeled English corpora and Englishto-Chinese translations are required for BSWE learning. It is proved that in spite of the small-scale of training set, our approach outperforms the state-of-the-art systems in NLP&CC 2013 CLSC share task. • We employ sentiment words and their negation features rather than all words in documents to learn sentiment-specific embeddings, which significantly reduces the dimension of input vectors as well as improves sentiment classification performance. 2 Related Work In this section, we review the literature related to this paper from two perspectives: cross-language sentiment classification and embedding learning for sentiment classification. 
2.1 Cross-language Sentiment Classification (CLSC) The critical problem of CLSC is how to bridge the gap between the source language and target language. Machine translations or parallel corpora are usually employed to solve this problem. We present a brief review of CLSC from two aspects: machine translation based approaches and parallel corpora based approaches. Machine translation based approaches use MT systems to project training data into the target language or test data into the source language. Wan (2009) proposed a co-training approach for CLSC. The approach first translated Chinese test data into English, and English training data into Chinese. Then, they performed training and testing in two independent views: English view and Chinese view. Gui et al. (2013) combined self-training approach with co-training approach by estimating the confidence of each monolingual system. Li et al. (2013) selected the samples in the source language that were similar to those in the target language to decrease the gap between two languages. Zhou et al. (2014a) proposed a combination CLSC model, which adopted denoising autoencoders (Vincent et al., 2008) to enhance the robustness to translation errors of the input. Most recently, a number of studies adopt deep learning technique to learn bilingual representations with parallel corpora. Bilingual representations have been successfully applied in many NLP tasks, such as machine translation (Zou et al., 2013), sentiment classification (Chandar A P et al., 2013; Zhou et al., 2014b), text classification (Chandar A P et al., 2014), etc. 431 Chandar A P et al. (2013) learned bilingual representations with aligned sentences throughout two phases: the language-specific representation learning phase and the shared representation learning phase. In the language-specific representation learning phase, they applied autoencoders to obtain a language-specific representation for each entity in two languages respectively. In shared representation learning phase, pairs of parallel language-specific representations were passed to an autoencoder to learn bilingual representations. To joint language-specific representations and bilingual representations, Chandar A P et al. (2014) integrated the two learning phases into a unified process to learn bilingual embeddings. Zhou et al. (2014b) employed bilingual representations for English-Chinese CLSC. The work mentioned above employed aligned sentences in bilingual embedding learning process. However, in the sentiment classification process, only representations in the source language are used for training, and representations in the target language are used for testing, which ignores the interactions of semantic information between the source language and target language. 2.2 Embedding Learning for Sentiment Classification Bilingual embedding learning algorithms focus on capturing syntactic and semantic similarities across languages, but ignore sentiment information. To date, many embedding learning algorithms have been developed for sentiment classification problem by incorporating sentiment information into word embeddings. Maas et al. (2011) presented a probabilistic model that combined unsupervised and supervised techniques to learn word vectors, capturing semantic information as well as sentiment information. Wang et al. (2014) introduced sentiment labels into Neural Network Language Models (Bengio et al., 2003) to enhance sentiment expression ability of word vectors. Tang et al. 
(2014) theoretically and empirically analyzed the effects of the syntactic context and sentiment information in word vectors, and showed that the syntactic context and sentiment information were equally important to sentiment classification. Recent years have seen a surge of interest in word embeddings with deep learning technique (Bespalov et al., 2011; Glorot et al., 2011; Socher et al., 2011; Socher et al., 2012), which have been empirically shown to preserve linguistic regularities (Mikolov et al., 2013). Our work focuses on learning bilingual sentiment word embeddings (BSWE) with deep learning technique. Unlike the work of Chandar A P et al. (2014) that adopted parallel corpora to learn bilingual embeddings, we only use training data and their translations to learn BSWE. More importantly, sentiment information is integrated into bilingual embeddings to improve their performance in CLSC. 3 Bilingual Sentiment Word Embeddings (BSWE) for Cross-language Sentiment Classification 3.1 Denoising Autoencoder It has been demonstrated that the denoising autoencoder could decrease the effects of translation errors on the performance of CLSC (Zhou et al., 2014a). This paper proposes a deep learning based approach, which employs the denoising autoencoder to learn the bilingual embeddings for CLSC. A denoising autoencoder is the modification of an autoencoder. The autoencoder (Bengio et al., 2007) includes an encoder fθ and a decoder gθ′. The encoder maps a d-dimensional input vector x ∈[0, 1]d to a hidden representation y ∈[0, 1]d′ through a deterministic mapping y = fθ(x) = σ(Wx + b), parameterized by θ = {W, b}. W is a weight matrix, b is a bias term, and σ(x) is the activation function. The decoder maps y back to a reconstructed vector ˆx = gθ′(y) = σ(WT y + c), parameterized by θ′ = {WT , c}, where c is the bias term for reconstruction. Through the process of encoding and decoding, the parameters θ and θ′ of the autoencoder will be trained by gradient descent to minimize the loss function. The sum of reconstruction crossentropies across the training set is usually used as the loss function: l(x) = − d X i=1 [xi log ˆxi+(1−xi) log(1−ˆxi)] (1) A denoising autoencoder enhances robustness to noises by corrupting the input x to a partially destroyed version ˜x. The desired noise level of the input x can be changed by adjusting the destruction fraction ν. For each input x, a fixed number νd (d is the dimension of x) of components are selected randomly, and their values are set to 0, 432 while the others are left untouched. Like an autoencoder, the destroyed input ˜x is mapped to a latent representation y = fθ(˜x) = σ(W˜x + b). Then y is mapped back to a reconstructed vector ˆx through ˆx = gθ′(y) = σ(WT y + c). The loss function of a denoising autoencoder is the same as that of an autoencoder. Minimizing the loss makes ˆx close to the input x rather than ˜x. Our BSWE learning process can be divided into two phases: the unsupervised phase of semantic learning and the supervised phase of sentiment learning. In the unsupervised phase, a denoising autoencoder is employed to learn the bilingual embeddings. In the supervised phase, the sentiment information is incorporated into the bilingual embeddings based on sentiment labels of documents to obtain BSWE. 
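As a concrete illustration of the component just described, here is a minimal NumPy sketch of a tied-weight denoising autoencoder: corruption of the input, encoding, decoding, and the cross-entropy reconstruction loss of Formula (1). It is a simplified sketch for exposition only; gradient-based training and the Theano implementation actually used in the paper are omitted.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrupt(x, nu, rng):
    # Set a fixed number (nu * d) of randomly chosen components to zero.
    x_tilde = x.copy()
    idx = rng.choice(x.size, size=int(nu * x.size), replace=False)
    x_tilde[idx] = 0.0
    return x_tilde

def denoising_autoencoder_loss(x, W, b, c, nu=0.2, seed=0):
    # Encode the corrupted input, decode with tied weights, and compare the
    # reconstruction against the clean input with cross-entropy (Formula (1)).
    rng = np.random.default_rng(seed)
    x_tilde = corrupt(x, nu, rng)
    y = sigmoid(W @ x_tilde + b)        # hidden representation
    x_hat = sigmoid(W.T @ y + c)        # reconstruction
    eps = 1e-12                         # numerical stability
    return -np.sum(x * np.log(x_hat + eps) + (1.0 - x) * np.log(1.0 - x_hat + eps))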
3.2 Unsupervised Phase of the Bilingual Embedding Learning In the unsupervised phase, the English training documents and their Chinese translations are employed to learn the bilingual embeddings (Sentiment polarity labels of documents are not employed in this phase). Based on the English documents, 2,000 English sentiment words in MPQA subjectivity lexicon1 are extracted by the Chisquare method (Galavotti et al., 2000). Their corresponding Chinese translations are used as Chinese sentiment words. Besides, some sentiment words are often modified by negation words, which lead to inversion of their polarities. Therefore, negation features are introduced to each sentiment word to represent its negative form. We take into account 14 frequently-used negation words in English such as not and none; 5 negation words in Chinese such as Ø (no/not) and vk (without). A sentiment word modified by these negation words in the window [-2, 2] is considered as its negative form in this paper, while sentiment word features remain the initial meaning. Negation features use binary expressions. If a sentiment word is not modified by negation words, the value of its negation features is set to 0. Thus, the sentiment words and their corresponding negation features in English and Chinese are adopted to represent the document pairs (xE, xC). We expect that pairs of documents could be forced to capture the common semantic information of two languages. To achieve this, a denoising 1http://mpqa.cs.pitt.edu/lexicons/subj lexicon autoencoder is used to perform the reconstructions of paired documents in both English and Chinese. Figure 1 shows the framework of bilingual embedding learning. E W E x E y C xˆ [ ]T E C W W , C x ) , ( C E l x x E xˆ E x ) ( E l x C W C x C y C xˆ C x ) , ( E C l x x E xˆ E x ) ( C l x [ ]T E C W W , E x C x (a) reconstruction from xE (b) reconstruction from xC Figure 1: The framework of bilingual embedding learning. For the corrupted versions ˜xE (˜xC) of the initial input vector xE (xC), we use the sigmoid function as the activation function to extract latent representations: yE = fθ(˜xE) = σ(WE˜xE + b) (2) yC = fθ(˜xC) = σ(WC ˜xC + b) (3) where WE and WC are the language-specific word representation matrices, corresponding to English and Chinese respectively. Notice that the bias b is shared to ensure that the produced representations in two languages are on the same scale. For the latent representations in either language, we would like two decoders to perform reconstructions in English and Chinese respectively. As shown in Figure 1(a), for the latent representation yE in English, one decoder is used to map yE back to a reconstruction ˆxE in English, and the other is used to map yE back to a reconstruction ˆxC in Chinese such that: ˆxE = gθ′(yE) = σ(WT EyE + cE) (4) ˆxC = gθ′(yE) = σ(WT CyE + cC) (5) where cE and cC are the biases of the decoders in English and Chinese, respectively. Similarly, the same steps repeat for the latent representation yC in Chinese, which are shown in Figure 1(b). The encoder and decoder structures allow us to learn a mapping within and across languages. Specifically, for a given document pair (xE, xC), we can learn bilingual embeddings to reconstruct xE from itself (loss l(xE)), reconstruct xC from itself (loss l(xC)), construct xC from 433 xE (loss l(xE, xC)), construct xE from xC (loss l(xC, xE)) and reconstruct the concatenation of xE and xC ([xE, xC]) from itself (loss l([xE, xC], [ˆxE, ˆxC])). 
The sum of these 5 losses is used as the loss function of the bilingual embeddings:
L = l(x_E) + l(x_C) + l(x_E, x_C) + l(x_C, x_E) + l([x_E, x_C], [x̂_E, x̂_C]) (6)
3.3 Supervised Phase of Sentiment Learning
In the unsupervised phase, we have learned bilingual embeddings that capture the semantic information within and across languages. However, the sentiment polarity of the text is ignored in this phase, and bilingual embeddings without sentiment information are not effective enough for the sentiment classification task. This paper therefore proposes an approach to learning BSWE for CLSC, which introduces a supervised learning phase to incorporate sentiment information into the bilingual embeddings. The process of the supervised phase is shown in Figure 2.
Figure 2: The supervised learning process.
For paired documents [x_E, x_C], the sigmoid function is adopted as the activation function to extract latent bilingual representations y_b = σ([W_E, W_C][x_E, x_C] + b), where [W_E, W_C] is the concatenation of W_E and W_C. The latent bilingual representation y_b is used to obtain the positive polarity probability p(s = 1|d; ξ) of a document through a sigmoid function:
p(s = 1|d; ξ) = σ(ϕ^T y_b + b_l) (7)
where ϕ is the logistic regression weight vector and b_l is the bias of the logistic regression. The sentiment label s is a Boolean value representing the sentiment polarity of a document: s = 0 represents negative polarity and s = 1 represents positive polarity. The parameters ξ* = {[W_E, W_C]*, b*, ϕ*, b_l*} are learned by maximizing the objective function over the sentiment polarity labels s_i of the documents d_i:
ξ* = \arg\max_ξ \sum_i \log p(s_i|d_i; ξ) (8)
Through the supervised learning phase, [W_E, W_C] is optimized by maximizing the sentiment polarity probability, so that rich sentiment information is encoded into the bilingual embeddings. The following experiments show that the proposed BSWE significantly outperform traditional bilingual embeddings in CLSC.
3.4 Bilingual Document Representation Method (BDR)
Once we have learned the BSWE [W_E, W_C], whose columns are representations of sentiment words, we can use them to represent documents in the two languages. Given an English training document d_E containing 2,000 sentiment word features s_1, s_2, ..., s_2000 and 2,000 corresponding negation features, we represent it as the TF-IDF weighted sum of BSWE:
φ_dE = \sum_{i=1}^{4000} TF-IDF(s_i) W_{E,·s_i} (9)
where W_{E,·s_i} is the column of W_E corresponding to feature s_i. Similarly, for its Chinese translation d_C containing 2,000 sentiment word features t_1, t_2, ..., t_2000 and 2,000 corresponding negation features, we represent it as:
φ_dC = \sum_{j=1}^{4000} TF-IDF(t_j) W_{C,·t_j} (10)
We propose a bilingual document representation method (BDR) in this paper, which represents each document d_i with the concatenation of its English and Chinese representations [φ_dE, φ_dC]. BDR is expected to enhance the ability to express sentiment and thereby further improve classification performance. These bilingual document representations are fed to a linear SVM to perform sentiment classification.
4 Experiment
4.1 Experimental Settings
Data Set. The proposed approach is evaluated on the NLP&CC 2013 CLSC dataset (http://tcci.ccf.org.cn/conference/2013/dldoc/evsam03.zip, http://tcci.ccf.org.cn/conference/2013/dldoc/evdata03.zip). The dataset consists of product reviews in three categories: book, DVD, and music.
Each category contains 4,000 labeled English reviews as training data (with a 1:1 ratio of positive to negative samples) and 4,000 unlabeled Chinese reviews as test data.
Tools. In our experiments, Google Translate (http://translate.google.cn/) is adopted for both English-to-Chinese and Chinese-to-English translation. ICTCLAS (Zhang et al., 2003) is used as the Chinese word segmentation tool. The denoising autoencoder is developed on top of the Theano system (Bergstra et al., 2010). BSWE are trained for 50 and 30 epochs in the unsupervised and supervised phases, respectively. SVMlight (Joachims, 1999) is used to train the linear SVM sentiment classifiers.
Evaluation Metric. The performance is evaluated by the classification accuracy for each category and by the average accuracy over the three categories. The category accuracy is defined as:
Accuracy_c = #system_correct_c / #system_total_c (11)
where c is one of the three categories, and #system_correct_c and #system_total_c stand for the number of correctly classified reviews and the total number of reviews in category c, respectively. The average accuracy is:
Average = (1/3) \sum_c Accuracy_c (12)
4.2 Evaluations on BSWE
In this section, we evaluate the quality of BSWE for CLSC. The dimension of the bilingual embeddings d is set to 50, and the destruction fraction ν is set to 0.2.
Effects of Bilingual Embedding Learning Methods
We first compare our unsupervised bilingual embedding learning method with the parallel corpora based method. The parallel corpora based method uses the paired documents in a parallel corpus (http://www.datatang.com/data/45485) to learn bilingual embeddings, while our method only uses the English training documents and their Chinese translations (sentiment polarity labels of the documents are not employed here). The Boolean feature weight calculation method is adopted to represent documents for bilingual embedding learning, and BDR is employed to represent the training and test data for sentiment classification. To represent the paired documents in the parallel corpus, 27,597 English words and 31,786 Chinese words are extracted for bilingual embedding learning. Our method needs only 2,000 English sentiment words, 2,000 Chinese sentiment words, and their negation features, which significantly reduces the dimension of the input vectors.
Figure 3: Our unsupervised bilingual embedding learning method vs. the parallel corpora based method (average accuracy as a function of corpus scale).
The average accuracies on the NLP&CC 2013 test data of the two bilingual embedding learning methods are shown in Figure 3. As can be seen from Figure 3, when the corpus scales of the two methods are the same (4,000 paired documents), our method (75.09% average accuracy) surpasses the parallel corpora based method (54.82% average accuracy) by about 20%. As the scale of the parallel corpus increases, the performance of the parallel corpora based method improves steadily, but it does not reach that of our bilingual embedding learning method. Even when the number of documents in the parallel corpus grows to 70,000, its average accuracy is only 70.05%. This demonstrates that our method is more suitable than the parallel corpora based method for learning bilingual embeddings for cross-language sentiment classification.
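As a rough sketch of this classification pipeline (not the authors' code), the snippet below builds BDR vectors as in Eqs. (9)-(10), trains a linear SVM with scikit-learn's LinearSVC standing in for SVMlight, and scores it with the accuracy metrics of Eqs. (11)-(12); the array shapes and toy data are assumptions.

```python
import numpy as np
from sklearn.svm import LinearSVC

def bdr(tfidf_en, tfidf_cn, W_E, W_C):
    """Bilingual document representation: concatenation of the TF-IDF weighted
    sums of English and Chinese embeddings (Eqs. 9-10 plus concatenation).
    tfidf_en, tfidf_cn: (n_docs, 4000) feature weights;
    W_E, W_C: (d, 4000) embedding matrices whose columns are word vectors."""
    phi_en = tfidf_en @ W_E.T      # (n_docs, d)
    phi_cn = tfidf_cn @ W_C.T      # (n_docs, d)
    return np.hstack([phi_en, phi_cn])

def category_accuracy(y_true, y_pred):
    """Eq. (11): correctly classified reviews / total reviews in the category."""
    return np.mean(y_true == y_pred)

# Toy usage with random data standing in for the NLP&CC 2013 reviews.
rng = np.random.default_rng(0)
d, n_feat, n_train, n_test = 50, 4000, 200, 100
W_E, W_C = rng.normal(size=(d, n_feat)), rng.normal(size=(d, n_feat))
X_train = bdr(rng.random((n_train, n_feat)), rng.random((n_train, n_feat)), W_E, W_C)
X_test = bdr(rng.random((n_test, n_feat)), rng.random((n_test, n_feat)), W_E, W_C)
y_train, y_test = rng.integers(0, 2, n_train), rng.integers(0, 2, n_test)

clf = LinearSVC().fit(X_train, y_train)
acc = {"book": category_accuracy(y_test, clf.predict(X_test))}   # one category shown
print(acc, "Average =", np.mean(list(acc.values())))              # Eq. (12) over categories
```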
Effects of Feature Weight in Bilingual Embeddings
In this part, we compare the Boolean and TF-IDF feature weight calculation methods in the bilingual embedding learning process. Table 1 shows the classification accuracy with the Boolean and TF-IDF methods.
Category   book     DVD      music    Average
Boolean    76.22%   74.30%   74.75%   75.09%
TF-IDF     76.65%   77.60%   74.50%   76.25%
Table 1: The classification accuracy with the Boolean and TF-IDF methods.
Generally, the TF-IDF method performs better than the Boolean method. The average accuracy of the TF-IDF method is 1.16% higher than that of the Boolean method, which indicates that the TF-IDF method effectively reflects the latent contribution of sentiment words to each document. The TF-IDF weight calculation method is used in the following experiments. Note that sentiment information has not yet been introduced into the bilingual embeddings here.
Effects of Sentiment Information in BSWE
After incorporating sentiment information into the bilingual embeddings, the performance of the bilingual embeddings (without sentiment information) and BSWE (with sentiment information) is compared in Figure 4.
Figure 4: Performance comparison of the bilingual embeddings and BSWE (accuracy per category and on average).
As can be seen from Figure 4, by encoding sentiment information in the bilingual embeddings, the performance in the book, DVD, and music categories improves significantly, to 79.47%, 78.72%, and 76.58% respectively (a 2.82% increase in book, 1.12% in DVD, and 2.08% in music). The average accuracy reaches 78.26%, which is 2.01% higher than that of the bilingual embeddings. The experimental results indicate the effectiveness of sentiment information in bilingual embedding learning. The BSWE learning approach is employed for CLSC in the following experiments.
Effects of Bilingual Document Representation Method
In this experiment, our bilingual document representation method (BDR) is compared with the following monolingual document representation methods.
En-En: represents training and test documents in English only, with W_E. English training documents and Chinese-to-English translations of test documents are both represented with W_E.
Cn-Cn: represents training and test documents in Chinese only, with W_C. English-to-Chinese translations of training documents and Chinese test documents are both represented with W_C.
En-Cn: represents English training documents with W_E and Chinese test documents with W_C. Chandar A P et al. (2014) employed this method in their work.
BDR: our bilingual document representation method, which represents training and test documents with both W_E and W_C.
Figure 5: Effects of the bilingual document representation method (BDR): average accuracy of En-En, Cn-Cn, En-Cn, and BDR as a function of the destruction fraction ν.
Figure 5 shows the average accuracy curves of the different document representation methods under different destruction fractions ν. We vary ν from 0 to 0.9 with an interval of 0.1. From Figure 5 we can see that En-En, Cn-Cn, and En-Cn obtain similar results, while BDR performs consistently better than the other representation methods throughout the interval [0, 0.9]. The consistent superiority of BDR comes from its enhanced ability to express sentiment.
Meanwhile, when the input x is partially destroyed (ν varies from 0.1 to 0.9), the performance of En-En, Cn-Cn, and En-Cn remains stable, which illustrates the robustness of the denoising autoencoder to corrupting noise. In addition, the average accuracies of BDR over the interval ν ∈ [0.1, 0.9] are all higher than the average accuracy under the condition ν = 0 (78.23%). Therefore, properly adding noise to the training data can improve the performance of BSWE for CLSC.
4.3 Influences of Dimension d and Destruction Fraction ν
Figure 6 shows the relationship between accuracy and the dimension d of BSWE, as well as that between accuracy and the destruction fraction ν of the autoencoders, for the different categories. The dimension of the embeddings d varies from 50 to 500, and the destruction fraction ν varies from 0.1 to 0.9. As shown in Figure 6, the average accuracies generally move upward as the dimension of BSWE increases. The average accuracies remain higher than 80% with ν varying from 0.1 to 0.5 and the dimension varying from 300 to 500. When ν = 0.1 and d = 400, the average accuracy reaches its peak value of 80.68% (category accuracy of 81.05% in book, 81.60% in DVD, and 79.40% in music). The experimental results show that in the BSWE learning process, increasing the dimension of the embeddings or properly adding noise to the training data helps improve the performance of CLSC. In this paper, we only evaluate BSWE with the dimension d varying from 50 to 500; there may still be room for further improvement if d continues to increase.
4.4 Comparison with Related Work
Table 2 compares the performance of our approach with some state-of-the-art systems on the NLP&CC 2013 CLSC dataset. Our approach achieves the best performance, with an 80.68% average accuracy. Compared with the recent related work, our approach is more effective and better suited to eliminating the language gap.
Chen et al. (2014) translated the Chinese test data into English and then gave different weights to sentiment words according to their subject-predicate components. They obtained 77.09% accuracy and took 2nd place in the NLP&CC 2013 CLSC shared task. This machine translation based approach is limited by translation errors.
System                book     DVD      music    Average
Chen et al. (2014)    77.00%   78.33%   75.95%   77.09%
Gui et al. (2013)     78.70%   79.65%   78.30%   78.89%
Gui et al. (2014)     80.10%   81.60%   78.60%   80.10%
Zhou et al. (2014a)   80.63%   80.95%   78.48%   80.02%
Our approach          81.05%   81.60%   79.40%   80.68%
Table 2: Performance comparisons on the NLP&CC 2013 CLSC dataset.
Gui et al. (2013; 2014) and Zhou et al. (2014a) adopted a multi-view approach to bridge the language gap. Gui et al. (2013) proposed a mixed CLSC model by combining co-training and transfer learning strategies and achieved the highest accuracy, 78.89%, in the NLP&CC CLSC shared task. Gui et al. (2014) further improved the accuracy to 80.10% by removing noise from the transferred samples to avoid negative transfer. Zhou et al. (2014a) built denoising autoencoders in two independent views to enhance robustness to translation errors in the inputs and achieved 80.02% accuracy. The multi-view approach learns language-specific classifiers in each view during training, which makes it difficult to capture the sentiment information common to the two languages. Our approach integrates bilingual embedding learning into a unified process and outperforms Chen et al. (2014), Gui et al. (2013), Gui et al. (2014), and Zhou et al. (2014a) by 3.59%, 1.79%, 0.58%, and 0.66%, respectively.
The superiority of our approach benefits from the unified bilingual embedding learning process and the integration of semantic and sentiment information. 5 Conclusion and Future Work This paper proposes an approach to learning BSWE by incorporating sentiment information into the bilingual embeddings for CLSC. The proposed approach learns BSWE with the labeled documents and their translations rather than parallel corpora. In addition, BDR is proposed to enhance the sentiment expression ability which combines English and Chinese representations. Experiments on the NLP&CC 2013 CLSC dataset show that our approach outperforms the previous stateof-the-art systems as well as traditional bilingual embedding systems. The proposed BSWE are only evaluated on English-Chinese CLSC in this paper, but it can be popularized to other languages. 437 Figure 6: The relationship between accuracies and dimension d as well as that between accuracies and destruction fraction ν. Both semantic and sentiment information play an important role in sentiment classification. In the following work, we will further investigate the relationship between semantic and sentiment information for CLSC, and balance their functions to optimize their combination for CLSC. Acknowledgments We wish to thank the anonymous reviewers for their valuable comments. This research is supported by National Natural Science Foundation of China (Grant No. 61272375). References Carmen Banea, Rada Mihalcea, Janyce Wiebe and Samer Hassan. 2008. Multilingual Subjectivity Analysis Using Machine Translation. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 127-135. Association for Computational Linguistics. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A Neural Probabilistic Language Model. The Journal of Machine Learning Research, vol 3: 1137-1155. Yoshua Bengio, Pascal Lamblin, Dan Popovici, and Hugo Larochelle. 2007. Greedy layer-wise training of deep networks. In Proceedings of Advances in Neural Information Processing Systems 19 (NIPS 06), pages 153-160. MIT Press. Yoshua Bengio, Aaron Courville, and Pascal Vincent. 2013. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence 35(8): 1798-1828. IEEE. James Bergstra, Olivier Breuleux, Frederic Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, Yoshua Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for scientific computing conference (SciPy). Dmitriy Bespalov, Bing Bai, Yanjun Qi, and Ali Shokoufandeh. 2011. Sentiment classification based on supervised latent n-gram analysis. In Proceedings of the Conference on Information and Knowledge Management, pages 375-382. ACM. Sarath Chandar A P, Mitesh M. Khapra, Balaraman Ravindran, Vikas Raykar and Amrita Saha. 2013. Multilingual deep learning. In Deep Learning Workshop at NIPS 2013. Sarath Chandar A P, Stanislas Lauly, Hugo Larochelle, Mitesh M Khapra, Balaraman Ravindran, Vikas Raykar, and Amrita Saha. 2014. An autoencoder approach to learning bilingual word representations. In Advances in Neural Information Processing Systems, pages 1853-1861. 438 Qiang Chen, Yanxiang He, Xule Liu, Songtao Sun, Min Peng, and Fei Li. 2014. Cross-Language Sentiment Analysis Based on Parser (in Chinese). Acta Scientiarum Naturalium Universitatis Pekinensis, 50 (1): 55-60. G. E. Hinton and R. R. Salakhutdinov. 2006. 
Reducing the Dimensionality of Data with Neural Networks. Science, vol 313: 504-507. Luigi Galavotti, Fabrizio Sebastiani, and Maria Simi. 2000. Feature Selection and Negative Evidence in Automated Text Categorization. In Proceedings of ECDL-00, 4th European Conference on Research and Advanced Technology for Digital Libraries. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proceedings of 28th International Conference on Machine Learning, pages 513-520. Lin Gui, Ruifeng Xu, Jun Xu, Li Yuan, Yuanlin Yao, Jiyun Zhou, Qiaoyun Qiu, Shuwei Wang, KamFai Wong, and Ricky Cheung. 2013. A mixed model for cross lingual opinion analysis. In Proceedings of Natural Language Processing and Chinese Computing, pages 93-104. Springer Verlag. Lin Gui, Ruifeng Xu, Qin Lu, Jun Xu, Jian Xu, Bin Liu, and Xiaolong Wang. 2014. Cross-lingual Opinion Analysis via Negative Transfer Detection. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Short Papers), pages 860-865. Association for Computational Linguistics. Thorsten Joachims. 1999. Making large-Scale SVM Learning Practical. Universit¨at Dortmund. Alistair Kennedy and Diana Inkpen. 2006. Sentiment classification of movie reviews using contextual valence shifters. Computational intelligence, 22(2): 110-125. Shoushan Li, Rong Wang, Huanhuan Liu, and ChuRen Huang. 2013. Active learning for cross-lingual sentiment classification. In Proceedings of Natural Language Processing and Chinese Computing, pages 236-246. Springer Verlag. Andrew L. Maas, Raymond E. Daly, Peter T. Pham, Dan Huang, Andrew Y. Ng, and Christopher Potts. 2011. Learning Word Vectors for Sentiment Analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 142-150. Association for Computational Linguistics. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of NAACLHLT, pages 746-751. Association for Computational Linguistics. Bo Pang, Lillian Lee and Shivakumar Vaithyanathan. 2002. Thumbs up? Sentiment classification using machine learning techniques. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 79-86. ACM. Bo Pang and Lillian Lee. 2004. A sentimental education: sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, pages 271-278. Association for Computational Linguistics. Richard Socher, Cliff Chiung-Yu Lin, Andrew Y. Ng, and Christopher D. Manning. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the International Conference on Machine Learning, pages 129-136. Bellevue. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic Compositionality through Recursive Matrix-Vector Spaces. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1201-1211. Association for Computational Linguistics. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning SentimentSpecific Word Embedding for Twitter Sentiment Classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistic, pages 1555-1565. Association for Computational Linguistics. Xuewei Tang and Xiaojun Wan. 2014. 
Learning Bilingual Embedding Model for Cross-language Sentiment Classification. In Proceedings of 2014 IEEE/WIC/ACM International Joint Conferences on Web Intelligence (WI) and Intelligent Agent Technologies (IAT), pages 134-141. IEEE. Pascal Vincent, Hugo Larochelle, Yoshua Bengio, and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th international conference on Machine learning, pages 1096-1103. ACM. Xiaojun Wan. 2009. Co-training for cross-lingual sentiment classification. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 235243. Association for Computational Linguistics. Yuan Wang, Zhaohui Li, Jie Liu, Zhicheng He, Yalou Huang, and Dong Li. 2014. Word Vector Modeling for Sentiment Analysis of Product Reviews. In Proceedings of Natural Language Processing and Chinese Computing, pages 168-180. Springer Verlag. 439 Huaping Zhang, Hongkui Yu, Deyi Xiong, and Qun Liu. 2003. HHMM-based Chinese Lexical Analyzer ICTCLAS. In 2nd SIGHAN workshop affiliated with 41th ACL, pages 184-187. Association for Computational Linguistics. Guangyou Zhou, Tingting He, and Jun Zhao. 2014b. Bridging the Language Gap: Learning Distributed Semantics for Cross-Lingual Sentiment Classification. In Proceedings of Natural Language Processing and Chinese Computing, pages 138-149. Springer Verlag. Huiwei Zhou, Long Chen, and Degen Huang. 2014a. Cross-lingual sentiment classification based on denoising autoencoder. In Proceedings of Natural Language Processing and Chinese Computing, pages 181-192. Springer Verlag. Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual Word Embedding for Phrase-Based Machine Translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 13931398. 440
2015
42
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 441–450, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Content Models for Survey Generation: A Factoid-Based Evaluation Rahul Jha⋆, Catherine Finegan-Dollak⋆, Reed Coke⋆, Ben King⋆, Dragomir Radev⋆† ⋆Department of EECS, University of Michigan, USA † School of Information, University of Michigan, USA {rahuljha,cfdollak,reedcoke,benking,radev}@umich.edu Abstract We present a new factoid-annotated dataset for evaluating content models for scientific survey article generation containing 3,425 sentences from 7 topics in natural language processing. We also introduce a novel HITS-based content model for automated survey article generation called HITSUM that exploits the lexical network structure between sentences from citing and cited papers. Using the factoid-annotated data, we conduct a pyramid evaluation and compare HITSUM with two previous state-of-the-art content models: C-Lexrank, a network based content model, and TOPICSUM, a Bayesian content model. Our experiments show that our new content model captures useful survey-worthy information and outperforms C-Lexrank by 4% and TOPICSUM by 7% in pyramid evaluation. 1 Introduction Survey article generation is the task of automatically building informative surveys for scientific topics. Given the rapid growth of publications in scientific fields, the development of such systems is crucial as human-written surveys exist for a limited number of topics and get outdated quickly. In this paper, we investigate content models for extracting survey-worthy information from scientific papers. Such models are an essential component of any system for automatic survey article generation. Earlier work in the area of survey article generation has investigated content models based on lexical networks (Mohammad et al., 2009; Qazvinian and Radev, 2008). These models take as input citing sentences that describe important papers on the topic and assign them a salience score based on centrality in a lexical network formed by the input citing sentences. In this Factoid Weight Question Answering answer extraction 6 question classification 6 definition of question answering 5 TREC QA track 5 information retrieval 5 Dependency Parsing non-projective dependency structures / trees 6 projectivity / projective dependency trees 6 deterministic parsing approaches: Nivre’s algorithm 5 terminology: head - dependent 4 grammar driven approaches for dependency parsing 4 Table 1: Sample factoids from the topics of question answering and dependency parsing along with their factoid weights. paper, we propose a new content model based on network structure previously unexplored for this task that exploits the lexical relationship between citing sentences and the sentences from the original papers that they cite. Our new formulation of the lexical network structure fits nicely with the hubs and authorities model for identifying important nodes in a network (Kleinberg, 1999), leading to a new content model called HITSUM. In addition to this new content model, we also describe how Bayesian content models previously explored in the news domain can be adapted for the content modeling task for survey generation. 
For the task of evaluating various content models discussed in this paper, we have annotated a total of 3,425 sentences across 7 topics in the field of natural language processing with factoids from each of the topics. The factoids we use were extracted from existing survey articles and tutorials on each topic (Jha et al., 2013), and thus represent information that must be captured by a survey article on the corresponding topic. Each of the factoids is assigned a weight based on its frequency in the surveys/tutorials, which allows us to do pyra441 Topic # Sentences dependency parsing 487 named entity recognition 383 question answering 452 semantic role labeling 466 sentiment analysis 613 summarization 507 word sense disambiguation 425 Table 2: List of seven NLP topics used in our experiments along with input size. mid evaluation of our content models. Some sample factoids are shown in Table 1. Evaluation using factoids extracted from existing survey articles can help us understand the limits of automated survey article generation and how well these systems can be expected to perform. For example, if certain kinds of factoids are missing consistently from our input sentences, improvements in content models are unlikely to get us closer to the goal of generating survey articles that match those generated by humans, and effort must be directed to extracting text from other sources that will contain the missing information. On the other hand, if most of the factoids exist in the input sentences but important factoids are not found by the content models, we can think of strategies for improving these models by doing error analysis. The main contributions of this paper are: • HITSUM, a new HITS-based content model for automatic survey generation for scientific topics. • A new dataset of 3,425 factoid-annotated sentences for scientific articles in 7 topics. • Experimental results for pyramid evaluation comparing three existing content models (Lexrank, C-Lexrank, TOPICSUM) with HITSUM. The rest of this paper is organized as follows. Section 2 describes the dataset used in our experiment and the factoid annotation process. Section 3 describes each of the content models used in our experiments including HITSUM. Section 4 describes our experiments and Section 5 summarizes the results. We summarize the related work in Section 6 and conclude in Section 7. 2 Data Prior research in automatic survey generation has explored using text from different parts of scientific papers. Some of the recent work has treated survey generation as a direct extension of single paper summarization (Qazvinian and Radev, 2008) and used citing sentences to a set of relevant papers as the input for the summarizer (Mohammad et al., 2009; Qazvinian et al., 2013). However, in our prior work, we have observed that it’s difficult to generate coherent and readable summaries using just citing sentences and have proposed the use of sentences from introductory texts of papers that cite a number of important papers on a topic (Jha et al., 2015). The use of full text allows for the use of discourse structure of these documents in framing coherent and readable surveys. Since the content models we explore are meant to be part of a larger system that should be able to generate coherent and readable survey articles, we use the introduction sentences for our experiments as well. 
The corpus we used for extracting our experimental data was the ACL Anthology Network, a comprehensive bibliographic dataset that contains full text and citations for papers in most of the important venues in natural language processing (Radev et al., 2013). An oracle method is used for selecting the initial set of papers for each topic. For each topic, the bibliographies of at least three human-written surveys were extracted, and any papers that appeared in more than one survey were added to the target document set for the topic. The text for summarization is extracted from introductory sections of papers that cite papers in the target document set. The intuition behind this is that the introductory sections of papers that cite these target document summarize the research in papers from the target document set as well as the relationships between these papers. Thus, these introductions can be thought of as mini-surveys for specific aspects of the topic; combining text from these introductory sections should allow us to generate good comprehensive survey articles for the topic1. For our experiments, we sort the citing papers based on the number of papers they cite 1Other sections of papers might have such information, e.g. related work. Initial data analysis showed, however, that not all papers in our corpus had related work sections. Thus for consistency, we decided to use introduction sections. The perfect system for this task would be able to extract ”related work style” text segments from an entire paper. 442 Input sentence Factoids According to [1] , the corpus based supervised machine learning methods are the most successful approaches to WSD where contextual features have been used mainly to distinguish ambiguous words in these methods. supervised wsd, corpus based wsd Compared with supervised methods, unsupervised methods do not require tagged corpus, but the precision is usually lower than that of the supervised methods. supervised wsd, unsupervised wsd Word sense disambiguation (WSD) has been a hot topic in natural language processing, which is to determine the sense of an ambiguous word in a specific context. definition of word sense disambiguation Improvement in the accuracy of identifying the correct word sense will result in better machine translation systems, information retrieval systems, etc. wsd for machine translation, wsd for information retrieval The SENSEVAL evaluation framework ( Kilgarriff 1998 ) was a DARPA-style competition designed to bring some conformity to the field of WSD, although it has yet to achieve that aim completely. senseval Table 3: Sample input sentences from the topic of word sense disambiguation annotated with factoids. in the target document set, pick the top 20 papers, and extract sentences from their introductions to form the input text for the summarizer. The seven topics used in our experiments and input size for each topic are shown in Table 2. Once the input text for each topic has been extracted, we annotate the sentences in the input text with factoids for that topic. Some annotated sentences in the topic of word sense disambiguation are shown in Table 3. Given this new annotated data, we can compare how the factoids are distributed across different citing sentences (as annotated by Jha et al. (2013)) and introduction sentences that we have annotated. For this, we divide the factoids into five categories: definitions, venue, resources, methodology, and applications. 
The fractional distribution of factoids in these categories is shown in Table 4. We can see that the distribution of factoids relating to venues, methodology and applications is similar for the two datasets. However, factoids related to definitional sentences are almost completely missing in the citing sentences data. This lack of background information in citing sentences is one of the motivations for using introduction sentences for survey article generation as opposed to previous work. The complete set of factoids as well as annotated sentences for all the topics is available for download at http: //clair.si.umich.edu/corpora/ Surveyor_CM_Data.tar.gz. 3 Content Models We now describe each of the content models used in our experiments. Factoid category % Citing % Intro definitions 0 4 venue 6 6 resources 18 2 methodology 70 83 applications 6 5 Table 4: Fractional distribution of factoids across various categories in citing sentences vs introduction sentences. 3.1 Lexrank Lexrank is a network-based content selection algorithm that serves as a baseline for our experiments. Given an input set of sentences, it first creates a network using these sentences where each node represents a sentence and each edge represents the tf-idf cosine similarity between the sentences. Two methods for creating the network are possible. First, we can remove all edges that are lower than a certain threshold of similarity (generally set to 0.1). The Lexrank value for a node p(u) in this case is calculated as: 1 −d N + d X v∈adj[u] p(v) deg(v) Where N is the total number of sentences, d is the damping factor that controls the probability of a random jump (usually set to 0.85), deg(v) is the degree of the node v, and adj[u] is the set of nodes connected to the node u. A different way of creating the network is to treat the sentence similarities as edge weights and use the adjacency matrix as a transition matrix after normalizing the rows; the formula then becomes: 443 A dictionary such as the LDOCE has broad coverage of word senses, useful for WSD . This paper describes a program that disambiguates English word senses in unrestricted text using statistical models of the major Roget’s Thesaurus categories. Our technique offers benefits both for online semantic processing and for the challenging task of mapping word senses across multiple MRDs in creating a merged lexical database. The words in the sentences may be any of the 28,000 headwords in Longman’s Dictionary of Contemporary English (LDOCE) and are disambiguated relative to the senses given in LDOCE. This paper describes a heuristic approach to automatically identifying which senses of a machinereadable dictionary (MRD) headword are semantically related versus those which correspond to fundamentally different senses of the word. Figure 1: A sentence from Pciting with a high hub score (bolded) and some of sentences from Pcited that it links to (italicised). The sentence from Pciting obtain a high hub score by being connected to the sentences with high authority scores. 1 −d N + d X v∈adj[u] cos(u, v) TotalCosv p(v) Where cos(u, v) gives the tf-idf cosine similarity between sentence u and v and TotalCosv = P z∈adj[v] cos(z, v). In our experiments, we employ this second formulation. The above equation can be solved efficiently using the power method (Newman, 2010) to obtain p(u) for each node, which is then used as the score for ordering the sentences. 
The final Lexrank values p(u) for a node represent the stationary distribution of the Markov chain represented by the transition matrix. Lexrank has been shown to perform well in summarization experiments (Erkan and Radev, 2004). 3.2 C-Lexrank C-Lexrank is a clustering-based summarization system that was proposed by Qazvinian and Radev (2008) to summarize different perspectives in citing sentences that reference a paper or a topic. To create summaries, C-LexRank constructs a fully connected network in which vertices are sentences, and edges are cosine similarities calculated using the tf-idf vectors of citation sentences. It then employs a hierarchical agglomeration clustering algorithm proposed by Clauset et al. (2004) to find communities of sentences that discuss the same scientific contributions. Once the graph is clustered and communities are formed, the method extracts sentences from different clusters to build a summary. It iterates through the clusters from largest to smallest, choosing the most salient sentence of each cluster, until the summary length limit is reached. The salience of a sentence in its cluster is defined as its Lexrank value in the lexical network formed by sentences in the cluster. 3.3 HITSUM The input set of sentences in our data come from introductory sections of papers that cite important papers on a topic. We’ll refer to the set of citing papers that provide the input text for the summarizer as Pciting and the set of important papers that represent the research we are trying to summarize as Pcited. Both Lexrank and C-Lexrank work by finding central sentences in a network formed by the input sentences and thus, only use the lexical information present in Pciting, while ignoring additional lexical information from the papers in Pcited. We now present a formulation that uses the network structure that exists between the sentences in the two sets of papers to incorporate additional lexical information into the summarization system. This system is based on the hubs and authorities or the HITS model (Kleinberg, 1999) and hence is called HITSUM. HITSUM, in addition to the sentences from the introductory sections of papers in Pciting, also uses sentences from the abstracts of Pcited. It starts by computing the tf-idf cosine similarity between the sentences of each paper pi ∈Pciting with the sentences in the abstracts of each paper pj ∈Pcited that is directly cited by pi. A directed edge is created between every sentence si in pi and sj in pj if sim(si, sj) > smin, where smin is a similarity threshold (set to 0.1 for our experiments). Once this process has been completed for all papers in Pciting, we end up with a bipartite graph between sentences from Pciting and Pcited. 
In this bipartite graph, sentences in Pcited that 444 φB φC/QA φD/J07−1005 φC/NER φD/I08−1071 the 0.066 question 0.044 metathesaurus 0.00032 ne 0.028 wikipedia 0.0087 of 0.040 questions 0.038 umls 0.00032 entity 0.022 pages 0.0053 and 0.034 answer 0.028 biomedical 0.00024 named 0.022 million 0.0018 a 0.029 answering 0.022 relevance 0.00024 entities 0.017 extracting 0.0018 in 0.027 qa 0.021 citation 0.00024 ner 0.014 articles 0.0018 to 0.027 answers 0.017 wykoff 0.00024 names 0.009 contributors 0.0018 is 0.017 2001 0.016 bringing 0.00016 location 0.008 version 0.0009 for 0.014 system 0.011 appropriately 0.00016 tagging 0.007 dakka 0.0009 that 0.012 trec 0.008 organized 0.00016 recognition 0.007 service 0.0009 we 0.011 factoid 0.008 foundation 0.00016 classes 0.007 academic 0.0009 Figure 2: Top words from different word distributions learned by TOPICSUM on our input document set of 15 topics. φB is the background word distribution that captures stop words. φC/QA and φC/NER are the word distributions for the topics of question answering and named entity recognition respectively. φD/J07−1005 is the document-specific word distribution for a single paper in question answering that focuses on clinical question answering. φD/I08−1071 is the document-specific word distribution for a single paper in named entity recognition that focuses on named entity recognition in Wikipedia articles. have a lot of incoming edges represent sentences that presented important contributions in the field. Similarly, sentences in Pciting that have a lot of outgoing edges represent sentences that summarize a number of important contributions in the field. This suggests using the HITS algorithm, which, given a network, assigns hubs and authorities scores to each node in the network in a mutually reinforcing way. Thus, nodes with high authority scores are those that are pointed to by a number of good hubs, and nodes with high hub scores are those that point to a number of good authorities. This can be formalized with the following equation for the hub score of a node: h(v) = X u∈successors(v) a(u) Where h(v) is the hub score for node v, successors(v) is the set of all nodes that v has an edge to, and a(u) is the authority score for node u. Similarly, the authority score for each node is computed as: a(v) = X u∈predecessors(v) h(u) Where predecessors(v) is the set of all nodes that have an edge to v. The hub and authority score for each node can be computed using the power method that starts with an initial value and iteratively updates the scores for each node based on the above equations until the hub and authority scores for each node converge to within a tolerance value (set to 1E-08 for our experiments). In our bipartite lexical network, we expect sentences in Pcited receiving high authority scores to be the ones reporting important contributions and sentences in Pciting that receive high hub scores to be sentences summarizing important contributions. Figure 1 shows an example of a sentence with a high hub score from the topic of word sense disambiguation, along with some of the sentences that it points to. HITSUM computes the hub and authority score for each sentence in the lexical network and then uses the hub scores for sentences in Pciting as their relevance score. Sentences from Pcited are part of the lexical network, but are not used in the output summary. 
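The hub and authority computation described above can be written directly from the two update equations; the snippet below is a minimal power-iteration version with per-iteration normalization (a detail the text does not spell out) and the 1E-08 convergence tolerance mentioned by the authors.

```python
import numpy as np

def hits(edges, n_hubs, n_auths, tol=1e-8, max_iter=1000):
    """HITS on a bipartite graph whose edges go from hub nodes (citing
    sentences) to authority nodes (cited-abstract sentences)."""
    A = np.zeros((n_hubs, n_auths))
    for i, j in edges:
        A[i, j] = 1.0
    hubs, auths = np.ones(n_hubs), np.ones(n_auths)
    for _ in range(max_iter):
        new_auths = A.T @ hubs              # a(v) = sum of h(u) over predecessors u
        new_hubs = A @ new_auths            # h(v) = sum of a(u) over successors u
        new_auths /= max(np.linalg.norm(new_auths), 1e-12)
        new_hubs /= max(np.linalg.norm(new_hubs), 1e-12)
        converged = (np.abs(new_hubs - hubs).max() < tol and
                     np.abs(new_auths - auths).max() < tol)
        hubs, auths = new_hubs, new_auths
        if converged:
            break
    return hubs, auths

# Toy usage: 3 citing sentences linked to 2 cited-abstract sentences.
hub_scores, auth_scores = hits([(0, 0), (0, 1), (1, 1), (2, 0)], 3, 2)
print(hub_scores, auth_scores)
```

Ranking the citing-side sentences by their hub scores then gives the relevance order HITSUM uses when filling the summary budget.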
3.4 TOPICSUM TOPICSUM is a probabilistic content model presented in Haghighi and Vanderwende (2009) and is very similar to an earlier model called BayesSum proposed by Daum´e and Marcu (2006). It is a hierarchical, LDA (Latent Dirichlet Allocation) style model that is based on the following generative story:2 words in any sentence in the corpus can come from one of three word distributions: a background word distribution φB that flexibly models stop words, a content word distribution φC for each document set that models content relevant to the entire document set, and a document-specific word distribution φD. The word distributions are learned using Gibbs sampling. Given n document sets each with k doc2To avoid confusion in use of the term “topic,” in this paper we refer to topics in the LDA sense as “word distributions.” “Topics” in this paper refer to the natural language processing topics such as question answering, word sense disambiguation, etc. 445 Topic Lexrank C-Lexrank TOPICSUM HITSUM dependency parsing 0.47 0.76 0.62 1.00∗ named entity recognition 0.80 0.89 0.90∗ 0.80 question answering 0.65 0.67 0.65 0.76∗ sentiment analysis 0.64 0.62 0.75∗ 0.63 semantic role labeling 0.75∗ 0.67 0.65 0.69 summarization 0.52 0.75∗ 0.57 0.68 word sense disambiguation 0.78 0.66 0.67 0.79∗ Average 0.66 0.72 0.69 0.76∗ Table 5: Pyramid scores obtained by different content models for each topic along with average scores for each model across all topics. For each topic as well as the average, the best performing method has been highlighted with a ∗. uments, we get n content word distributions and n ∗k document-specific distributions leading to a total of 1 + n + n ∗k word distributions. To illustrate the kind of distributions TOPICSUM learns in our dataset, Figure 2 shows the top words along with their probabilities from the background word distribution, two content distributions and two document-specific word distributions. We see that the model effectively captures general content words for each topic. φC/QA is the word distribution for the topic of question answering, while φD/J07−1005 is the document-specific word distribution for a specific paper in the document set for question answering3 that focuses on clinical question answering. The word distribution φD/J07−1005 contains words that are relevant to the specific subtopic in the paper, while φC/QA contains content words relevant to the general topic of question answering. Similar results can be seen in the word distributions for named entity recognition φC/NER and the document-specific word distribution for a specific paper in the topic φD/I08−10714 that focuses on comparable entity mining. These topics, learned using Gibbs sampling, can be used to select sentences for a summary in the following way. To summarize a document set, we greedily select sentences that minimize the KLdivergence of our summary to the document-setspecific topic. Thus, the score for each sentence s is KL(φC||Ps) where Ps is the sentence word distribution with add-one smoothing applied to both distributions. Using this objective, sentences that 3Dina Demner-Fushman and Jimmy Lin. 2007. Answering Clinical Questions with Knowledge-Based and Statistical Techniques. Computational Linguistics. 4Wisam Dakka and Silviu Cucerzan. 2008. Augmenting wikipedia with named entity tags. In Proceedings of IJCNLP. contain words from the content word distribution with high probability are more likely to be selected in the generated summary. 
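As an illustration of this selection step only (not the authors' released implementation), the sketch below greedily grows a summary that minimizes KL(φC || P_summary) with add-one smoothing; the hand-made content distribution and whitespace tokenization are simplifying assumptions, since in TOPICSUM φC is learned by Gibbs sampling.

```python
import math
from collections import Counter

def kl(p, q):
    """KL(p || q) for two distributions defined over the same vocabulary."""
    return sum(p[w] * math.log(p[w] / q[w]) for w in p if p[w] > 0)

def smoothed_dist(counts, vocab):
    """Add-one smoothed word distribution over vocab."""
    total = sum(counts.values()) + len(vocab)
    return {w: (counts.get(w, 0) + 1) / total for w in vocab}

def greedy_kl_summary(sentences, phi_c_counts, budget=3):
    """Greedily add the sentence that keeps KL(phi_C || P_summary) lowest."""
    vocab = set(phi_c_counts) | {w for s in sentences for w in s.split()}
    phi_c = smoothed_dist(Counter(phi_c_counts), vocab)
    summary, summary_counts = [], Counter()
    while len(summary) < budget and len(summary) < len(sentences):
        best = min(
            (s for s in sentences if s not in summary),
            key=lambda s: kl(phi_c,
                             smoothed_dist(summary_counts + Counter(s.split()), vocab)),
        )
        summary.append(best)
        summary_counts += Counter(best.split())
    return summary

# Toy usage with invented content-word counts for question answering.
phi_c = {"question": 5, "answering": 4, "answer": 3, "trec": 2}
sents = ["question answering systems answer natural language questions",
         "we use a parser for preprocessing",
         "the trec question answering track evaluates answer extraction"]
print(greedy_kl_summary(sents, phi_c, budget=2))
```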
We implemented TOPICSUM in Python using Numpy and then optimized it using Scipy Weave. This code is available for use at https://github.com/rahuljha/ content-models. The repository also contains Python code for HITSUM. 4 Experiments For evaluating our content models, we generated 2,000-character-long summaries using each of the systems (Lexrank, C-Lexrank, HITSUM, and TOPICSUM) for each of the topics. The summaries are generated by ranking the input sentences using each content model and picking the top sentences till the budget of 2,000 characters is reached. Each of these summaries is then given a pyramid score (Nenkova and Passonneau, 2004) computed using the factoids assigned to each sentence. For the pyramid evaluation, the factoids are organized in a pyramid of order n. The top tier in this pyramid contains the highest weighted factoids, the next tier contains the second highest weighted factoids, and so on. The score assigned to a summary is the ratio of the sum of the weights of the factoids it contains to the sum of weights of an optimal summary with the same number of factoids. Pyramid evaluation allows us to capture how each content model performs in terms of selecting sentences with the most highly weighted factoids. Since the factoids have been extracted from human-written surveys and tutorials on each of the topics, the pyramid score gives us an idea of the survey-worthiness of the sentences selected by 446 Question classification is a crucial component of modern question answering system. A what-type question is defined as the one whose question word is ‘what’, ‘which’, ‘name’ or ‘list’. This metaclassifier beats all published numbers on standard question classification benchmarks [4.4]. Due to its challenge, this paper focuses on what-type question classification. In this paper, we focus on fine-category classification. The promise of a machine learning approach is that the QA system builder can now focus on designing features and providing labeled data, rather than coding and maintaining complex heuristic rule bases. Figure 3: Part of the summary generated by HITSUM for the topic of question answering. each content model. 5 Results and Discussion The results of pyramid evaluation are summarized in Table 5. It shows the pyramid score obtained by each system on each of the topics as well as the average score. The highest performing system on average is HITSUM with an average performance of 76%. HITSUM does especially well for the topics of dependency parsing, question answering, and word sense disambiguation. The second best performing system is C-Lexrank, which is not surprising because it was developed specifically for the task of scientific paper summarization. However, HITSUM outperforms C-Lexrank on several topics and by 4% on average. Figure 3 shows part of the summary generated by HITSUM for the topic of question answering. The summary contains mostly informative sentences about different aspects of question answering. One obvious drawback of this summary is that it’s not very coherent and readable. However, previous work has shown how network based content models can be combined with discourse models to generate informative yet readable summaries (Jha et al., 2015). We looked at some of the network statistics of the lexical networks used by HITSUM. One of the things we noticed is that the lexical networks for topics where HITSUM performs well seem to have higher degree assortativity compared to the topics for which it doesn’t perform well. 
High degree assortativity in lexical networks means sentences with high degree tend to be linked to other sentences with high degree. This suggests that HITS performs well for topics where a set of important factoids are mentioned in many citing and source sentences. A larger evaluation dataset is needed for a more thorough analysis of how the network properties of these lexical networks correlate with the performance of various content models. TOPICSUM does well on the topics of named entity recognition and sentiment analysis, but does not do well on average. This can be attributed to the fact that it was developed as a content model for the domain of news summarization and does not translate well to our domain. All systems outperform Lexrank, which achieves the lowest average score. This result is also intuitive, because every other system in our evaluation uses additional information not used by Lexrank: C-Lexrank exploits the community structure in the input set of sentences, HITSUM exploits the lexical information from cited sentences, and TOPICSUM exploits information about global word distribution across all topics. The different systems we tried in our evaluation depend on using different lexical information and seem to perform well for different topics. This suggests that further gains can be made by combining these systems. For example, C-Lexrank and HITSUM can be combined by utilizing both the network formed by citing sentences and the network between the citing sentences and the cited sentences into a larger lexical network. TOPICSUM scores can be combined with these networkbased system by using the TOPICSUM scores as a prior for each node, and then running either Pagerank or HITS on top of it. We leave exploration of such hybrid systems to future work. 6 Related Work The goal of content models in the context of summarization is to extract a representation from input text that can help in identifying important sentences that should be in the output summary. Our work is related to two main classes of content models: network-based methods and probabilis447 tic methods. We summarize related work for each of these classes of content models, followed by a short summary of the related work in the domain of scientific summarization. Network-based content models: Networkbased content models (Erkan and Radev, 2004; Mihalcea and Tarau, 2004) work by converting the input sentences into a network. Each sentence is represented by a node in the network, and the edges between sentences are given weight based on the similarities of sentences. They then run Pagerank on this network, and sentences are selected based on their Pagerank score in the network. For computing Pagerank, the network can either be pruned by removing edges that have weights less than a certain threshold, or a weighted version of Pagerank can be run on the network. The method can also be modified for query-focused summarization (Otterbacher et al., 2009). C-Lexrank (Qazvinian and Radev, 2008) modifies Lexrank by first running a clustering algorithm on the network to partition the network into different communities and then selecting sentences from each community by running Lexrank on the sub-network within each community. C-Lexrank was also used in the task of automated survey generation with encouraging results (Mohammad et al., 2009). Probabilistic content models: One of the first probabilistic content models seems to be BAYESSUM (Daum´e and Marcu, 2006), designed for query-focused summarization. 
BAYESSUM models a set of document collections using a hierarchical LDA style model. Each word in a sentence can be generated using one of three language models: 1) a general English language model that captures English filler or background knowledge, 2) a document-specific language model, and 3) a query language model. These language models are inferred using expectation propagation, and then sentences are ranked based on their likelihood of being generated from the query language model. A similar model for general multidocument summarization called TOPICSUM was proposed by Haghighi and Vanderwende (2009), where the query language model is replaced by a documentcollection-specific language model; thus sentences are selected based on how likely they are to contain information that summarizes the entire document collection instead of information pertaining to individual documents or background knowledge. Barzilay and Lee (2004) present a Hidden Markov Model (HMM) based content model where the hidden states of the HMM represent the topics in the text. The transition probabilities are learned through Viterbi decoding. They show that the HMM model can be used for both reordering of sentences for coherence and discriminative scoring of sentences for extractive summarization. Fung and Ngai (2006) present a similar HMM-based model for multi-document summarization. Jiang and Zhai (2005) proposed an HMM-based model for the problem of extracting coherent passages relevant to a query from a relevant document. They learn an HMM with two background states (B1 and B2) and a queryrelevant state (R), each associated with a language model. The HMM starts in background state B1, switches to relevant state R and then switches to the next background state B2. The sentences that the HMM emits while in R constitute the queryrelevant passage from the document. Scientific summarization: Early work in scientific summarization used abstracts of scientific articles to produce summaries of specific scientific papers (Kupiec et al., 1995). However, later work (Elkiss et al., 2008) showed that citation sentences are as important in understanding the main contributions of a paper. Nanba and Okumura (1999) explored using reference information to build a system for supporting writing survey articles. Their system extracts citing sentences that describe a referred paper and identify the type of reference relationships. The type of references can be one of the three: 1) type B that base on other researcher’s theory, 2) type C that compare with related works, or 3) type O representing relationships other than B or C. They posit that type C sentences are the most important for survey generation and can help show the similarities and differences among cited papers. Teufel and Moens (2002) propose a method for summarizing scientific articles based on rhetorical status of sentences in scientific articles. They annotate sentences in a corpus of 80 scientific articles with rhetorical status, where the rhetorical status can be one of aim (specific research goal), textual (section structure), own (neutral description of own work), background (generally accepted background), contrast (comparison with other work), 448 basis (agreement with or continuation of other work), and other (neutral description of other’s work). They describe classifiers for tagging the rhetorical status of sentences automatically and present a method for using this to assign relevance score to sentences. In other work, Kan et al. 
(2002) use a corpus of 2000 annotated bibliographies for scientific papers as a first step towards a supervised summarization system. They found that summaries in their corpus were mostly single-document abstractive summaries that were both indicative and informative and were organized around a “theme,” making them ideal for query-based summarization. Mei and Zhai (2008) presented an impact-based summarization method for single-paper summarization that assigns relevance scores to sentences in a paper based on their similarity to the set of citing sentences that reference the paper. More recently, Hoang and Kan (2010) present a method for automated related work generation. Their system takes as input a set of keywords arranged in a hierarchical fashion that describes a target paper’s topic. They hypothesize that sentences in a related work provide either background information or specific contributions. They use two different models to extract these two kinds of sentences using the input tree and combines them to create the final output summary. Zhang et al. (2013) explore methods for biomedical summarization by identifying cliques in a network of semantic predications extracted from citations. These cliques are then clustered and labeled to identify different points of view represented in the summary. 7 Conclusion and Future Work We have presented a new factoid-annotated dataset for evaluating content models for scientific survey article generation by annotating sentences from seven topics in natural language processing. We also introduce a new HITS-based content model called HITSUM for survey article generation that exploits the lexical information from cited papers along with citing papers to rank input sentences for survey-worthiness. We conduct pyramid evaluation using our factoid dataset to compare HITSUM with existing network-based methods (Lexrank, C-Lexrank) as well as methods based on Bayesian content modeling (TOPICSUM). On average, HITSUM outperforms C-Lexrank by 4% and TOPICSUM by 7%. Since the different content models use different kinds of lexical information, further gains might be obtained by combining some of these models into a joint model. We plan to explore this in future work. References Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Daniel Marcu Susan Dumais and Salim Roukos, editors, HLTNAACL 2004: Main Proceedings, pages 113–120, Boston, Massachusetts, USA, May 2 - May 7. Association for Computational Linguistics. Aaron Clauset, Mark E. J. Newman, and Cristopher Moore. 2004. Finding community structure in very large networks. Phys. Rev. E, 70(6):066111, Dec. Hal Daum´e, III and Daniel Marcu. 2006. Bayesian query-focused summarization. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44, pages 305–312, Stroudsburg, PA, USA. Association for Computational Linguistics. Aaron Elkiss, Siwei Shen, Anthony Fader, G¨unes¸ Erkan, David States, and Dragomir R. Radev. 2008. Blind men and elephants: What do citation summaries tell us about a research article? Journal of the American Society for Information Science and Technology, 59(1):51–62. G¨unes¸ Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based centrality as salience in text summarization. Journal of Artificial Intelligence Research (JAIR). Pascale Fung and Grace Ngai. 2006. 
One story, one flow: Hidden markov story models for multilingual multidocument summarization. ACM Trans. Speech Lang. Process., 3(2):1–16, July. Aria Haghighi and Lucy Vanderwende. 2009. Exploring content models for multi-document summarization. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL ’09, pages 362–370, Stroudsburg, PA, USA. Association for Computational Linguistics. Cong Duy Vu Hoang and Min-Yen Kan. 2010. Towards automated related work summarization. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, COLING ’10, pages 427–435, Stroudsburg, PA, USA. Association for Computational Linguistics. Rahul Jha, Amjad Abu-Jbara, and Dragomir R. Radev. 2013. A system for summarizing scientific topics 449 starting from keywords. In Proceedings of The Association for Computational Linguistics (short paper). Rahul Jha, Reed Coke, and Dragomir R. Radev. 2015. Surveyor: A system for generating coherent survey articles for scientific topics. In Proceedings of the Twenty-Ninth AAAI Conference. Jing Jiang and ChengXiang Zhai. 2005. Accurately extracting coherent relevant passages using hidden Markov models. pages 289–290. Min-Yen Kan, Judith L. Klavans, and Kathleen R. McKeown. 2002. Using the Annotated Bibliography as a Resource for Indicative Summarization. In The International Conference on Language Resources and Evaluation (LREC), Las Palmas, Spain. Jon M. Kleinberg. 1999. Authoritative sources in a hyperlinked environment. J. ACM, 46:604–632, September. Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR-95), pages 68–73. Qiaozhu Mei and ChengXiang Zhai. 2008. Generating impact-based summaries for scientific literature. In Proceedings of the 46th Annual Conference of the Association for Computational Linguistics (ACL-08), pages 816–824. Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into texts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-04), July. Saif Mohammad, Bonnie Dorr, Melissa Egan, Ahmed Hassan, Pradeep Muthukrishan, Vahed Qazvinian, Dragomir Radev, and David Zajic. 2009. Using citations to generate surveys of scientific paradigms. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, NAACL ’09, pages 584–592, Stroudsburg, PA, USA. Association for Computational Linguistics. Hidetsugu Nanba and Manabu Okumura. 1999. Towards multi-paper summarization using reference information. In Proceedings of the 16th International Joint Conference on Artificial Intelligence (IJCAI-99), pages 926–931. Ani Nenkova and Rebecca Passonneau. 2004. Evaluating content selection in summarization: The pyramid method. In Proceedings of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies (HLTNAACL ’04). Mark E. J. Newman. 2010. Networks: An Introduction. Oxford University Press. Jahna Otterbacher, Gunes Erkan, and Dragomir R. Radev. 2009. Biased lexrank: Passage retrieval using random walks with question-based priors. Inf. Process. Manage., 45(1):42–54, January. Vahed Qazvinian and Dragomir R. Radev. 2008. Scientific paper summarization using citation summary networks. 
In Proceedings of the 22nd International Conference on Computational Linguistics (COLING-08), Manchester, UK. Vahed Qazvinian, Dragomir R. Radev, Saif M. Mohammad, Bonnie Dorr, David Zajic, Michael Whidby, and Taesun Moon. 2013. Generating extractive summaries of scientific paradigms. J. Artif. Int. Res., 46(1):165–201, January. Dragomir R. Radev, Pradeep Muthukrishnan, Vahed Qazvinian, and Amjad Abu-Jbara. 2013. The acl anthology network corpus. Language Resources and Evaluation, pages 1–26. Simone Teufel and Marc Moens. 2002. Summarizing scientific articles: experiments with relevance and rhetorical status. Computational Linguistics, 28(4):409–445. Han Zhang, Marcelo Fiszman, Dongwook Shin, Bartlomiej Wilkowski, and Thomas C. Rindflesch. 2013. Clustering cliques for graph-based summarization of the biomedical research literature. BMC Bioinformatics, 14:182. 450
2015
43
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 451–461, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Training a Natural Language Generator From Unaligned Data Ondˇrej Dušek and Filip Jurˇcíˇcek Charles University in Prague, Faculty of Mathematics and Physics Institute of Formal and Applied Linguistics Malostranské námˇestí 25, CZ-11800 Prague, Czech Republic {odusek,jurcicek}@ufal.mff.cuni.cz Abstract We present a novel syntax-based natural language generation system that is trainable from unaligned pairs of input meaning representations and output sentences. It is divided into sentence planning, which incrementally builds deep-syntactic dependency trees, and surface realization. Sentence planner is based on A* search with a perceptron ranker that uses novel differing subtree updates and a simple future promise estimation; surface realization uses a rule-based pipeline from the Treex NLP toolkit. Our first results show that training from unaligned data is feasible, the outputs of our generator are mostly fluent and relevant. 1 Introduction We present a novel approach to natural language generation (NLG) that does not require finegrained alignment in training data and uses deep dependency syntax for sentence plans. We include our first results on the BAGEL restaurant recommendation data set of Mairesse et al. (2010). In our setting, the task of a natural language generator is that of converting an abstract meaning representation (MR) into a natural language utterance. This corresponds to the sentence planning and surface realization NLG stages as described by Reiter and Dale (2000). It also reflects the intended usage in a spoken dialogue system (SDS), where the NLG component is supposed to translate a system output action into a sentence. While the content planning NLG stage has been used in SDS (e.g., Rieser and Lemon (2010)), we believe that deciding upon the contents of the system’s utterance is generally a task for the dialogue manager. We focus mainly on the sentence planning part in this work, and reuse an existing rule-based surface realizer to test the capabilities of the generator in an end-to-end setting. Current NLG systems usually require a separate training data alignment step (Mairesse et al., 2010; Konstas and Lapata, 2013). Many of them use a CFG or operate in a phrase-based fashion (Angeli et al., 2010; Mairesse et al., 2010), which limits their ability to capture long-range syntactic dependencies. Our generator includes alignment learning into sentence planner training and uses deep-syntactic trees with a rule-based surface realization step, which ensures grammatical correctness of the outputs. Unlike previous approaches to trainable sentence planning (e.g., Walker et al. (2001); Stent et al. (2004)), our generator does not require a handcrafted base sentence planner. This paper is structured as follows: in Section 2, we describe the architecture of our generator. Sections 3 and 4 then provide further details on its main components. In Section 5, we describe our experiments on the BAGEL data set, followed by an analysis of the results in Section 6. Section 7 compares our generator to previous related works and Section 8 concludes the paper. 2 Generator Architecture Our generator (see Figure 1) operates in two stages that roughly correspond to the traditional NLG stages of sentence planning and surface realization. 
In the first stage, a statistical sentence planner generates deep-syntactic dependency trees from the input meaning representation. These are converted into plain text sentences in the second stage by the (mostly rule-based) surface realizer. We use deep-syntax dependency trees to represent the sentence plan, i.e. the intermediate data structure between the two aforementioned stages. These are ordered dependency trees that only contain nodes for content words (nouns, full verbs, adjectives, adverbs) and coordinating conjunctions. 451 meaning representation (dialogue acts) Sentence planner A* search candidate generator scorer expand candidate sentence plan tree into new candidates score candidates to select next one to be expanded sentence plan (deep syntax tree) plain text sentence Surface realizer mostly rule-based pipeline (from Treex NLP toolkit) Word ordering Agreement Compound verb forms Grammatical words Punctuation Word Inflection Phonetic changes inform(name=X, type=placetoeat, eattype=restaurant, area=riverside, food=Italian) t-tree X-name n:subj be v:fin italian adj:attr restaurant n:obj river n:by+X X is an italian restaurant by the river. Figure 1: Overall structure of our generator Each node has a lemma and a formeme – a concise description of its surface morphosyntactic form, which may include prepositions and/or subordinate conjunctions (Dušek et al., 2012). This structure is based on the deep-syntax trees of the Functional Generative Description (Sgall et al., 1986), but it has been simplified to fit our purposes (see Figure 1 in the middle). There are several reasons for taking the traditional two-step approach to generation (as opposed to joint approaches, see Section 7) and using deep syntax trees as the sentence plan format: First, generating into deep syntax simplifies the task for the statistical sentence planner – the planner does not need to handle surface morphology and auxiliary words. Second, a rule-based syntactic realizer allows us to ensure grammatical correctness of the output sentences, which would be more difficult in a sequence-based and/or statistical approach.1 And third, a rule-based surface realizer from our sentence plan format is relatively easy to implement and can be reused for any domain within the same language. As in our case, it is also possible to reuse and/or adapt an existing surface realizer (see Section 4). Deep-syntax annotation of sentences in the training set is needed to train the sentence planner, but we assume automatic annotation and reuse an existing deep-syntactic analyzer from the Treex NLP framework (Popel and Žabokrtský, 2010).2 We use dialogue acts (DA) as defined in the BAGEL restaurant data set of Mairesse et al. (2010) as a MR in our experiments throughout this paper. Here, a DA consists of a dialogue act type, which is always “inform” in the set, and a list of slot-value pairs (SVPs) that contain information about a restaurant, such as food type or location (see the top of Figure 1). Our generator can be easily adapted to a different MR, though. 3 Sentence Planner The sentence planner is based on a variant of the A* algorithm (Hart et al., 1968; Och et al., 2001; Koehn et al., 2003). It starts from an empty sentence plan tree and tries to find a path to the optimal sentence plan by iteratively adding nodes. It keeps two sets of hypotheses, i.e., candidate sentence plan trees, sorted by their score – hypotheses to expand (open set) and already expanded (closed set). 
It uses the following two subcomponents to guide the search: • a candidate generator that is able to incrementally generate candidate sentence plan trees (see Section 3.1), • a scorer/ranker that scores the appropriateness of these trees for the input MR (see Section 3.2). 1This issue would become more pressing in languages with richer morphology than English. 2See http://ufal.mff.cuni.cz/treex. Domainindependent deep syntax analysis for several languages is included in this framework; the English pipeline used here involves a statistical part-of-speech tagger (Spoustová et al., 2007) and a dependency parser (McDonald et al., 2005), followed by a rule-based conversion to deep syntax trees. 452 t-tree be v:fin t-tree recommend v:fin t-tree serve v:fin t-tree be v:fin t-tree restaurant n:obj be v:fin t-tree be v:fin t-tree X-name n:subj be v:fin t-tree restaurant n:subj be v:fin t-tree X-name n:subj be v:fin t-tree X-name n:subj restaurant n:obj be v:fin t-tree X-name n:subj bar n:obj Original sentence plan tree: Its successors (selection): Figure 2: Candidate generator example inputs and outputs The basic workflow of the sentence planner algorithm then looks as follows: Init: Start from an open set with a single empty sentence plan tree and an empty closed set. Loop: 1. Select the best-scoring candidate C from the open set. Add C to closed set. 2. The candidate generator generates C, a set of possible successors to C. These are trees that have more nodes than C and are deemed viable. Note that C may be empty. 3. The scorer scores all successors in C and if they are not already in the closed set, it adds them to the open set. 4. Check if the best successor in the open set scores better than the best candidate in the closed set. Stop: The algorithm finishes if the top score in the open set is lower than the top score in the closed set for d consecutive iterations, or if there are no more candidates in the open set. It returns the best-scoring candidate from both sets. 3.1 Generating Sentence Plan Candidates Given a sentence plan tree, which is typically incomplete and may be even empty, the candidate generator generates its successors by adding one new node in all possible positions and with all possible lemmas and formemes (see Figure 2). While a naive implementation – trying out any combination of lemmas and formemes found in the training data – works in principle, it leads to an unmanageable number of candidate trees even for a very small domain. Therefore, we include several rules that limit the number of trees generated: 1. Lemma-formeme compatibility – only nodes with a combination of lemma and formeme seen in the training data are generated. 2. Syntactic viability – the new node must be compatible with its parent node (i.e., this combination, including the dependency left/right direction, must be seen in the training data). 3. Number of children – no node can have more children than the maximum for this lemmaformeme combination seen in the training data. 4. Tree size – the generated tree cannot have more nodes than trees seen in the training data. The same limitation applies to the individual depth levels – the training data limit the number of nodes on the n-th depth level as well as the maximum depth of any tree. This is further conditioned on the input SVPs – the maximums are only taken from training examples that contain the same SVPs that appear on the current input. 5. 
Weak semantic compatibility – we only include nodes that appear in the training data alongside the elements of the input DA, i.e., nodes that appear in training examples containing SVPs from the current input, 6. Strong semantic compatibility – for each node (lemma and formeme), we make a “compatibility list” of SVPs and slots that are present in all training data examples containing this node. We then only allow generating this node if all of them are present in the current input DA. To allow for more generalization, this rule can be applied just to lemmas 453 (disregarding formemes), and a certain number of SVPs/slots from the compatibility list may be required at maximum. Only Rules 4 (partly), 5, and 6 depend on the format of the input meaning representation. Using a different MR would require changing these rules to work with atomic substructures of the new MR instead of SVPs. While especially Rules 5 and 6 exclude a vast number of potential candidate trees, this limitation is still much weaker than using hard alignment links between the elements of the MR and the output words or phrases. It leaves enough room to generate many combinations unseen in the training data (cf. Section 6) while keeping the search space manageable. To limit the space of potential tree candidates even further, one could also use automatic alignment scores between the elements of the input MR and the tree nodes (obtained using a tool such as GIZA++ (Och and Ney, 2003)). 3.2 Scoring Sentence Plan Trees The scorer for the individual sentence plan tree candidates is a function that maps global features from the whole sentence plan tree t and the input MR m to a real-valued score that describes the fitness of t in the context of m. We first describe the basic version of the scorer and then our two improvements – differing subtree updates and future promise estimation. Basic perceptron scorer The basic scorer is based on the linear perceptron ranker of Collins and Duffy (2002), where the score is computed as a simple dot product of the features and the corresponding weight vector: score(t, m) = w⊤· feat(t, m) In the training phase, the weights w are initialized to one. For each input MR, the system tries to generate the best sentence plan tree given current weights, ttop. The score of this tree is then compared to the score of the correct goldstandard tree tgold.3 If ttop ̸= tgold and the gold-standard tree ranks worse than the generated one (score(ttop, m) > score(tgold, m)), the weight vector is updated by the feature value difference of 3Note that the “gold-standard” sentence plan trees are actually produced by automatic annotation. For the purposes of scoring, they are, however, treated as gold standard. the generated and the gold-standard tree: w = w + α · (feat(tgold, m) −feat(ttop, m)) where α is a predefined learning rate. Differing subtree updates In the basic version described above, the scorer is trained to score full sentence plan trees. However, it is also used to score incomplete sentence plans during the decoding. This leads to a bias towards bigger trees regardless of their fitness for the input MR. Therefore, we introduced a novel modification of the perceptron updates to improve scoring of incomplete sentence plans: In addition to updating the weights using the top-scoring candidate ttop and the gold-standard tree tgold (see above), we also use their differing subtrees ti top, ti gold for additional updates. 
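For illustration, the basic scorer and update rule just described can be sketched as follows, before turning to the differing-subtree refinement (a minimal sketch, not the authors' implementation; feat is a hypothetical feature-extraction function assumed to return a NumPy vector of the same dimension as the weights):

import numpy as np

def score(tree, mr, weights, feat):
    # score(t, m) = w . feat(t, m)
    return float(np.dot(weights, feat(tree, mr)))

def basic_perceptron_update(weights, mr, t_gold, t_top, feat, alpha=0.1):
    # If the gold-standard tree ranks worse than the top generated tree,
    # shift the weights by the feature difference, scaled by the learning rate
    # (the paper additionally requires t_top != t_gold).
    if score(t_top, mr, weights, feat) > score(t_gold, mr, weights, feat):
        weights = weights + alpha * (feat(t_gold, mr) - feat(t_top, mr))
    return weights

The default learning rate of 0.1 matches the setting reported in Section 5.2.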
Starting from the common subtree tc of ttop and tgold, pairs of differing subtrees ti top, ti gold are created by gradually adding nodes from ttop into ti top and from tgold into ti gold (see Figure 3). To maintain the symmetry of the updates in case that the sizes of ttop and tgold differ, more nodes may be added in one step.4 The additional updates then look as follows: t0 top = t0 gold = tc for i in 1, . . . min{|ttop| −|tc|, |tgold| −|tc|} −1 : ti top = ti−1 top + node(s) from ttop ti gold = ti−1 gold + node(s) from tgold w = w + α · (feat(ti gold, m) −feat(ti top, m)) Future promise estimation To further improve scoring of incomplete sentence plan trees, we incorporate a simple future promise estimation for the A* search intended to boost scores of sentence plans that are expected to further grow.5 It is based on the expected number of children Ec(n) of different node types (lemmaformeme pairs).6 Given all nodes n1 . . . n|t| in a 4For example, if tgold has 6 more nodes than tc and ttop has 4 more, there will be 3 pairs of differing subtrees, with ti gold having 2, 4, and 5 more nodes than tc and ti top having 1, 2, and 3 more nodes than tc. We have also evaluated a variant where both sets of subtrees ti gold, ti top were not equal in size, but this resulted in degraded performance. 5Note that this is not the same as future path cost in the original A* path search, but it plays an analogous role: weighing hypotheses of different size. 6Ec(n) is measured as the average number of children over all occurrences of the given node type in the training data. It is expected to be domain-specific. 454 t-tree X n:subj be v:fin restaurant n:obj moderate adj:attr price n:attr range n:in+X Gold standard tgold: cheap adj:attr italian adj:attr t-tree X n:subj be v:fin restaurant n:obj Top generated ttop: t-tree X n:subj be v:fin restaurant n:obj Common subtree tc: Differing subtrees for update: t-tree X n:subj be v:fin restaurant n:obj price n:attr range + cheap adj:attr t-tree X n:subj be v:fin restaurant n:obj t1 gold t1 top Figure 3: An example of differing subtrees The gold standard tree tgold has three more nodes than the common subtree tc, while the top generated tree ttop has two more. Only one pair of differing subtrees t1 gold, t1 top is built, where two nodes are added into t1 gold and one node into t1 top. sentence plan tree t, the future promise is computed in the following way: fp = λ · X w · |t| X i=1 max{0, Ec(ni) −c(ni)} where c(ni) is the current number of children of node ni, λ is a preset weight parameter, and P w is the sum of the current perceptron weights. Multiplying by the weights sum makes future promise values comparable to trees scores. Future promise is added to tree scores throughout the tree generation process, but it is disregarded for the termination criterion in the Stop step of the generation algorithm and in perceptron weight updates. Averaging weights and parallel training To speed up training using parallel processing, we use the iterative parameter mixing approach of McDonald et al. (2010), where training data are split into several parts and weight updates are averaged after each pass through the training data. Following Collins (2002), we record the weights after each training pass, take an average at the end, and use this as the final weights for prediction. 4 Surface Realizer We use the English surface realizer from the Treex NLP toolkit (cf. Section 2 and (Ptáˇcek, 2008)). 
It is a simple pipeline of mostly rule-based blocks that gradually change the deep-syntactic trees into surface dependency trees, which are then linearized to sentences. It includes the following steps: • Agreement – morphological attributes of some nodes are deduced based on agreement with other nodes (such as in subject-predicate agreement). • Word ordering – the input trees are already ordered, so only a few rules for grammatical words are applied. • Compound verb forms – additional verbal nodes are added for verbal particles (infinitive or phrasal verbs) and for compound expressions of tense, mood, and modality. • Grammatical words – prepositions, subordinating conjunctions, negation particles, articles, and other grammatical words are added into the sentence. • Punctuation – nodes for commas, final punctuation, quotes, and brackets are introduced. • Word Inflection – words are inflected according to the information from formemes and agreement. • Phonetic changes – English “a” becomes “an” based on the following word. The realizer is designed as domain-independent and handles most English grammatical phenomena. A simple “round-trip” test – using automatic analysis with subsequent generation – reached a BLEU score (Papineni et al., 2002) of 89.79% against the original sentences on the whole BAGEL data set, showing only minor differences between the input sentence and generation output (mostly in punctuation). 455 restaurant n:obj X-area n:in+X and x X-area n:in+X restaurant n:obj X-area n:in+X and x X-area n:in+X Figure 4: Coordination structures conversion: original (left) and our format (right). 5 Experimental Setup Here we describe the data set used in our experiments, the needed preprocessing steps, and the settings of our generator specific to the data set. 5.1 Data set We performed our experiments on the BAGEL data set of Mairesse et al. (2010), which fits our usage scenario in a spoken dialogue system and is freely available.7 It contains a total of 404 sentences from a restaurant information domain (describing the restaurant location, food type, etc.), which correspond to 202 dialogue acts, i.e., each dialogue act has two paraphrases. Restaurant names, phone numbers, and other “non-enumerable” properties are abstracted – replaced by an “X” symbol – throughout the generation process. Note that while the data set contains alignment of source SVPs to target phrases, we do not use it in our experiments. For sentence planner training, we automatically annotate all the sentences using the Treex deep syntactic analyzer (see Section 2). The annotation obtained from the Treex analyzer is further simplified for the sentence planner in two ways: • Only lemmas and formemes are used in the sentence planner. Other node attributes are added in the surface realization step (see Section 5.2). • We convert the representation of coordination structures into a format inspired by Universal Dependencies.8 In the original Treex annotation style, the conjunction heads both conjuncts, whereas in our modification, the first 7Available for download at: http://farm2.user. srcf.net/research/bagel/. 8http://universaldependencies.github.io conjunct is at the top, heading the coordination and the second conjunct (see Figure 4). The coordinations can be easily converted back for the surface realizer, and the change makes the task easier for the sentence planner: it may first generate one node and then decide whether it will add a conjunction and a second conjunct. 
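For illustration, this rewiring can be sketched over plain parent-pointer trees (a simplified stand-in, not the Treex implementation; it assumes each coordination has exactly two conjuncts, as in Figure 4):

def convert_coordination(parent, conjunctions, conjuncts):
    # parent:       dict mapping node id -> parent node id
    # conjunctions: ids of coordinating conjunction nodes
    # conjuncts:    dict mapping conjunction id -> [first conjunct, second conjunct]
    new_parent = dict(parent)
    for conj in conjunctions:
        first, second = conjuncts[conj]
        new_parent[first] = parent[conj]   # first conjunct takes the conjunction's place
        new_parent[conj] = first           # the conjunction now depends on the first conjunct
        new_parent[second] = first         # and so does the second conjunct
    return new_parent

# Figure 4, original Treex style: "and" depends on "restaurant" and heads both conjuncts.
parent = {"and": "restaurant", "X-area-1": "and", "X-area-2": "and"}
print(convert_coordination(parent, ["and"], {"and": ["X-area-1", "X-area-2"]}))
# {'and': 'X-area-1', 'X-area-1': 'restaurant', 'X-area-2': 'X-area-1'}

Converting back for the surface realizer amounts to undoing the same three reattachments.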
5.2 Generator settings In our candidate generator, we use all the limitation heuristics described in Section 3.1. For strong semantic compatibility (Rule 6), we use just lemmas and require at most 5 SVPs/slots from the lemma’s compatibility list in the input DA. We use the following feature types for our sentence planner scorer: • current tree properties – tree depth, total number of nodes, number of repeated nodes • tree and input DA – number of nodes per SVP and number of repeated nodes per repeated SVP, • node features – lemma, formeme, and number of children of all nodes in the current tree, and combinations thereof, • input features – whole SVPs (slot + value), just slots, and pairs of slots in the DA, • combinations of node and input features, • repeat features – occurrence of repeated lemmas and/or formemes in the current tree combined with repeated slots in the input DA, • dependency features – parent-child pairs for lemmas and/or formemes, including and excluding their left-right order, • sibling features – sibling pairs for lemmas and/or formemes, also combined with SVPs, • bigram features – pairs of lemmas and/or formemes adjacent in the tree’s left-right order, also combined with SVPs. All feature values are normalized to have a mean of 0 and a standard deviation of 1, with normalization coefficients estimated from training data. The feature set can be adapted for a different MR format – it only must capture all important parts of the MR, e.g., for a tree-like MR, the nodes and edges, and possibly combinations thereof. 456 Setup BLEU for training portion NIST for training portion 10% 20% 30% 50% 100% 10% 20% 30% 50% 100% Basic perc. 46.90 52.81 55.43 54.53 54.24 4.295 4.652 4.669 4.758 4.643 + Diff-tree upd. 44.16 50.86 53.61 55.71 58.70 3.846 4.406 4.532 4.674 4.876 + Future promise 37.25 53.57 53.80 58.15 59.89 3.331 4.549 4.607 5.071 5.231 Table 1: Evaluation on the BAGEL data set (averaged over all ten cross-validation folds) “Training portion” denotes the percentage of the training data used in the experiment. “Basic perc.” = basic perceptron updates, “+ Diff-tree upd.” = with differing subtree perceptron updates, “+ Future promise” = with future promise estimation. BLEU scores are shown as percentages. Based on our preliminary experiments, we use 100 passes over the training data and limit the number of iterations d that do not improve score to 3 for training and 4 for testing. We use a hard maximum of 200 sentence planner iterations per input DA. The learning rate α is set to 0.1. We use training data parts of 36 or 37 training examples (1/10th of the full training set) in parallel training. If future promise is used, its weight λ is set to 0.3. The Treex English realizer expects not only lemmas and formemes, but also additional grammatical attributes for all nodes. In our experiments, we simply use the most common values found in the training data for the particular nodes as this is sufficient for our domain. In larger domains, some of these attributes may have to be also included in sentence plans. 6 Results Same as Mairesse et al. (2010), we use 10-fold cross-validation where DAs seen at training time are never used for testing, i.e., both paraphrases or none of them are present in the full training set. We evaluate using BLEU and NIST scores (Papineni et al., 2002; Doddington, 2002) against both reference paraphrases for a given test DA. 
The results of our generator are shown in Table 1, both for standard perceptron updates and our improvements – differing subtree updates and future promise estimation (see Section 3.2). Our generator did not achieve the same performance as that of Mairesse et al. (2010) (ca. 67%).9 However, our task is substantially harder since the generator also needs to learn the alignment of phrases to SVPs and determine whether all required information is present on the output (see also Section 7). Our differing tree updates clearly bring a substantial improvement over standard per9Mairesse et al. (2010) do not give a precise BLEU score number in their paper, they only show the values in a graph. ceptron updates, and scores keep increasing with bigger amounts of training data used, whereas with plain perceptron updates, the scores stay flat. The increase with 100% is smaller since all training DAs are in fact used twice, each time with a different paraphrase.10 A larger training set with different DAs should bring a bigger improvement. Using future promise estimation boosts the scores even further, by a smaller amount for BLEU but noticeably for NIST. Both improvements on the full training set are considered statistically significant at 95% confidence level by the paired bootstrap resampling test (Koehn, 2004). A manual inspection of a small sample of the results confirmed that the automatic scores reflect the quality of the generated sentences well. If we look closer at the generated sentences (see Table 2), it becomes clear that the generator learns to produce meaningful utterances which mostly correspond well to the input DA. It is able to produce original paraphrases and generalizes to previously unseen DAs. On the other hand, not all required information is always present, and some facts are sometimes repeated or irrelevant information appears. This mostly happens with input slot-value pairs that occur only rarely in the training data; we believe that a larger training set will solve this problem. Alternatively, one could introduce additional scorer features to discourage conflicting information. Another problem is posed by repeated slots in the input DA, which are sometimes not reflected properly in the generated sentence. This suggests that a further refinement of the scorer feature set may be needed. 10We used the two paraphrases that come with each DA as independent training instances. While having two different gold-standard outputs for a single input is admittedly not ideal for a discriminative learner, it still brings an improvement in our case. 457 Input DA inform(name=X-name, type=placetoeat, eattype=restaurant, near=X-near, food=Continental, food=French) Reference X is a French and continental restaurant near X. Generated X is a French and continental restaurant near X. Input DA inform(name=X-name, type=placetoeat, area=riverside, near=X-near, eattype=restaurant) Reference X restaurant is near X on the riverside. Generated X is a restaurant in the riverside area near X. Input DA inform(name=X-name, type=placetoeat, area=X-area, pricerange=moderate, eattype=restaurant) Reference X is a moderately priced restaurant in X. Generated X is a restaurant in the X area. Input DA inform(name=X-name, type=placetoeat, eattype=restaurant, area=riverside, food=French) Reference X is a French restaurant on the riverside. Generated X is a French restaurant in the riverside area which serves French food. 
Input DA inform(name=X-name, type=placetoeat, eattype=restaurant, pricerange=moderate, area=X-area, food=Contemporary, food=English) Reference X is a moderately priced English contemporary restaurant in X. Generated X is an English restaurant in the X area which serves expensive food in the moderate price range located in X. Input DA inform(name=X-name, type=placetoeat, eattype=restaurant, area=citycentre, near=X-near, food="Chinese takeaway", food=Japanese) Reference X is a Chinese takeaway and Japanese restaurant in the city centre near X. Generated X is a Japanese restaurant in the centre of town near X and X. Input DA inform(name=X-name, type=placetoeat, pricerange=moderate, eattype=restaurant) Reference X is a restaurant that offers moderate price range. Generated X is a restaurant in the moderate price range. Table 2: Example generated sentences Sentences generated when training on the full set and using differing subtree updates and future promise estimation. 7 Related Work Previous trainable methods in sentence planning use in principle two techniques: First, in the overgeneration and ranking approach (Walker et al., 2001; Stent et al., 2004), many sentence plans are generated using a rule-based planner and then the best one is selected by a statistical ranker. Second, parameter optimization trains adjustable parameters of a handcrafted generator to produce outputs with desired properties (Paiva and Evans, 2005; Mairesse and Walker, 2008). As opposed to our approach, both methods require an existing handcrafted sentence planner. Other previous works combine sentence planning and surface realization into a single step and do not require a handcrafted base module. Wong and Mooney (2007) experiment with a phrasebased machine translation system, comparing and combining it with an inverted semantic parser based on synchronous context-free grammars. Lu et al. (2009) use tree conditional random fields over hybrid trees that combine natural language phrases with formal semantic expressions. Angeli et al. (2010) generate text from database records through a sequence of classifiers, gradually selecting database records, fields, and corresponding textual realizations to describe them. Konstas and Lapata (2013) recast the whole NLG problem as parsing over a probabilistic context-free grammar estimated from database records and their descriptions. Mairesse et al. (2010) convert input DAs into “semantic stacks”, which correspond to natural language phrases and contain slots and their values on top of each other. Their generation model uses two dynamic Bayesian networks: the first one performs an ordering of the input semantic stacks, inserting intermediary stacks which correspond to grammatical phrases, the second one then produces a concrete surface realization. Dethlefs et al. (2013) approach generation as a sequence labeling task and use a conditional random field classifier, assigning a word or a phrase to each input MR element. Unlike our work, the joint approaches typically include the alignment of input MR elements to output words in a separate preprocessing step (Wong and Mooney, 2007; Angeli et al., 2010), or require pre-aligned training data (Mairesse et al., 2010; Dethlefs et al., 2013). In addition, their basic algorithm often requires a specific input MR format, e.g., a tree (Wong and Mooney, 2007; Lu et al., 2009) or a flat database (Angeli et al., 2010; Konstas and Lapata, 2013; Mairesse et al., 2010). 
While dependency-based deep syntax has been used previously in statistical NLG, the approaches known to us (Bohnet et al., 2010; Belz et al., 2012; Ballesteros et al., 2014) focus only on the surface realization step and do not include a sentence plan458 ner, whereas our work is mainly focused on statistical sentence planning and uses a rule-based realizer. Our approach to sentence planning is most similar to Zettlemoyer and Collins (2007), which use a candidate generator and a perceptron ranker for CCG parsing. Apart from proceeding in the inverse direction and using dependency trees, we use only very generic rules in our candidate generator instead of language-specific ones, and we incorporate differing subtree updates and future promise estimation into our ranker. 8 Conclusions and Further Work We have presented a novel natural language generator, capable of learning from unaligned pairs of input meaning representation and output utterances. It consists of a novel, A*-search-based sentence planner and a largely rule-based surface realizer from the Treex NLP toolkit. The sentence planner is, to our knowledge, first to use dependency syntax and learn alignment of semantic elements to words or phrases jointly with sentence planning. We tested our generator on the BAGEL restaurant information data set of Mairesse et al. (2010). We have achieved very promising results, the utterances produced by our generator are mostly fluent and relevant. They did not surpass the BLEU score of the original authors; however, our task is substantially harder as our generator does not require fine-grained alignments on the input. Our novel feature of the sentence planner ranker – using differing subtrees for perceptron weight updates – has brought a significant performance improvement. The generator source code, along with configuration files for experiments on the BAGEL data set, is available for download on Github.11 In future work, we plan to evaluate our generator on further domains, such as geographic information (Kate et al., 2005), weather reports (Liang et al., 2009), or flight information (Dahl et al., 1994). In order to improve the performance of our generator and remove the dependency on domainspecific features, we plan to replace the perceptron ranker with a neural network. We also want to experiment with removing the dependency on the Treex surface realizer by generating directly into dependency trees or structures into which de11https://github.com/UFAL-DSG/tgen pendency trees can be converted in a languageindependent way. Acknowledgments This work was funded by the Ministry of Education, Youth and Sports of the Czech Republic under the grant agreement LK11221 and core research funding, SVV project 260 104, and GAUK grant 2058214 of Charles University in Prague. It used language resources stored and distributed by the LINDAT/CLARIN project of the Ministry of Education, Youth and Sports of the Czech Republic (project LM2010013). The authors would like to thank Lukáš Žilka, Ondˇrej Plátek, and the anonymous reviewers for helpful comments on the draft. References G. Angeli, P. Liang, and D. Klein. 2010. A simple domain-independent probabilistic approach to generation. In Proc. of the 2010 Conference on Empirical Methods in Natural Language Processing, page 502–512. M. Ballesteros, S. Mille, and L. Wanner. 2014. Classifiers for data-driven deep sentence generation. In Proceedings of the 8th International Natural Language Generation Conference, pages 108–112, Philadelphia. A. Belz, B. Bohnet, S. 
Mille, L. Wanner, and M. White. 2012. The Surface Realisation Task: Recent Developments and Future Plans. In INLG 2012, pages 136–140. B. Bohnet, L. Wanner, S. Mille, and A. Burga. 2010. Broad coverage multilingual deep sentence generation with a stochastic multi-level realizer. In Proc. of the 23rd International Conference on Computational Linguistics, page 98–106. M. Collins and N. Duffy. 2002. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, page 263–270, Stroudsburg, PA, USA. Association for Computational Linguistics. M. Collins. 2002. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, page 1–8. Association for Computational Linguistics. D. A. Dahl, M. Bates, M. Brown, W. Fisher, K. Hunicke-Smith, D. Pallett, E. Rudnicky, and E. Shriberg. 1994. Expanding the scope of the ATIS 459 task: the ATIS-3 corpus. In in Proc. ARPA Human Language Technology Workshop ’92, Plainsboro, NJ, pages 43–48. Morgan Kaufmann. N. Dethlefs, H. Hastie, H. Cuayáhuitl, and O. Lemon. 2013. Conditional Random Fields for Responsive Surface Realisation using Global Features. In Proceedings of ACL, Sofia. G. Doddington. 2002. Automatic evaluation of machine translation quality using N-gram cooccurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, pages 138–145, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. O. Dušek, Z. Žabokrtský, M. Popel, M. Majliš, M. Novák, and D. Mareˇcek. 2012. Formemes in English-Czech deep syntactic MT. In Proceedings of the Seventh Workshop on Statistical Machine Translation, page 267–274, Montreal. P. E. Hart, N. J. Nilsson, and B. Raphael. 1968. A formal basis for the heuristic determination of minimum cost paths. IEEE Transactions on Systems Science and Cybernetics, 4(2):100–107. R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the National Conference on Artificial Intelligence, volume 20. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999. P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL-HLT - Volume 1, page 48–54, Stroudsburg, PA, USA. Association for Computational Linguistics. P. Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of EMNLP, page 388–395. I. Konstas and M. Lapata. 2013. A global model for concept-to-text generation. Journal of Artificial Intelligence Research, 48:305–346. P. Liang, M. I. Jordan, and D. Klein. 2009. Learning semantic correspondences with less supervision. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, page 91–99. W. Lu, H. T. Ng, and W. S. Lee. 2009. Natural language generation with tree conditional random fields. In Proc. of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1-Volume 1, page 400–409. F. Mairesse and M. Walker. 2008. Trainable generation of big-five personality styles through datadriven parameter estimation. In Proc. of the 46th Annual Meeting of the ACL (ACL), page 165–173. F. Mairesse, M. Gaši´c, F. Jurˇcíˇcek, S. 
Keizer, B. Thomson, K. Yu, and S. Young. 2010. Phrase-based statistical language generation using graphical models and active learning. In Proc. of the 48th Annual Meeting of the ACL, page 1552–1561. R. McDonald, F. Pereira, K. Ribarov, and J. Hajiˇc. 2005. Non-projective dependency parsing using spanning tree algorithms. In Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing, page 523–530. R. McDonald, K. Hall, and G. Mann. 2010. Distributed training strategies for the structured perceptron. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 456–464. Association for Computational Linguistics. F. J. Och and H. Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. F. J. Och, N. Ueffing, and H. Ney. 2001. An efficient A* search algorithm for statistical machine translation. In Proceedings of the Workshop on Datadriven Methods in Machine Translation - Volume 14, page 1–8, Stroudsburg, PA, USA. Association for Computational Linguistics. D. S. Paiva and R. Evans. 2005. Empirically-based control of natural language generation. In Proc. of the 43rd Annual Meeting of ACL, page 58–65, Stroudsburg, PA, USA. ACL. K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th annual meeting of the Association for Computational Linguistics, page 311–318. M. Popel and Z. Žabokrtský. 2010. TectoMT: modular NLP framework. In Proceedings of IceTAL, 7th International Conference on Natural Language Processing, page 293–304, Reykjavík. J. Ptáˇcek. 2008. Two tectogrammatical realizers side by side: Case of English and Czech. In Fourth International Workshop on Human-Computer Conversation, Bellagio, Italy. E. Reiter and R. Dale. 2000. Building Natural Language Generation Systems. Cambridge Univ. Press. V. Rieser and O. Lemon. 2010. Natural language generation as planning under uncertainty for spoken dialogue systems. In Empirical methods in natural language generation, page 105–120. P. Sgall, E. Hajiˇcová, and J. Panevová. 1986. The meaning of the sentence in its semantic and pragmatic aspects. D. Reidel, Dordrecht. 460 D. J. Spoustová, J. Hajiˇc, J. Votrubec, P. Krbec, and P. Kvˇetoˇn. 2007. The Best of Two Worlds: Cooperation of Statistical and Rule-based Taggers for Czech. In Proceedings of the Workshop on BaltoSlavonic Natural Language Processing: Information Extraction and Enabling Technologies, pages 67–74. Association for Computational Linguistics. A. Stent, R. Prasad, and M. Walker. 2004. Trainable sentence planning for complex information presentation in spoken dialog systems. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, pages 79–86. M. A. Walker, O. Rambow, and M. Rogati. 2001. SPoT: a trainable sentence planner. In Proc. of 2nd meeting of NAACL, page 1–8, Stroudsburg, PA, USA. ACL. Y. W. Wong and R. J. Mooney. 2007. Generation by inverting a semantic parser that uses statistical machine translation. In Proc. of Human Language Technologies: The Conference of the North American Chapter of the ACL (NAACL-HLT-07), page 172–179. L. S. Zettlemoyer and M. Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. 
In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 678–687, Prague.
2015
44
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 462–472, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Event-Driven Headline Generation Rui Sun†, Yue Zhang‡, Meishan Zhang‡ and Donghong Ji† † Computer School, Wuhan University, China ‡ Singapore University of Technology and Design {ruisun, dhji}@whu.edu.cn {yue zhang, meishan zhang}@sutd.edu.sg Abstract We propose an event-driven model for headline generation. Given an input document, the system identifies a key event chain by extracting a set of structural events that describe them. Then a novel multi-sentence compression algorithm is used to fuse the extracted events, generating a headline for the document. Our model can be viewed as a novel combination of extractive and abstractive headline generation, combining the advantages of both methods using event structures. Standard evaluation shows that our model achieves the best performance compared with previous state-of-the-art systems. 1 Introduction Headline generation (HG) is a text summarization task, which aims to describe an article (or a set of related paragraphs) using a single short sentence. The task is useful in a number of practical scenarios, such as compressing text for mobile device users (Corston-Oliver, 2001), generating table of contents (Erbs et al., 2013), and email summarization (Wan and McKeown, 2004). This task is challenging in not only informativeness and readability, which are challenges to common summarization tasks, but also the length reduction, which is unique for headline generation. Previous headline generation models fall into two main categories, namely extractive HG and abstractive HG (Woodsend et al., 2010; Alfonseca et al., 2013). Both consist of two steps: candidate extraction and headline generation. Extractive models choose a set of salient sentences in candidate extraction, and then exploit sentence compression techniques to achieve headline generation (Dorr et al., 2003; Texts Phrases Events Sentences Candidate Ranking Candidate #1 ... Candidate #i ... Candidate #K Multi-Sentence Compression Headline Candidate Extraction Headline Generation Figure 1: System framework. Zajic et al., 2005). Abstractive models choose a set of informative phrases for candidate extraction, and then exploit sentence synthesis techniques for headline generation (Soricut and Marcu, 2007; Woodsend et al., 2010; Xu et al., 2010). Extractive HG and abstractive HG have their respective advantages and disadvantages. Extractive models can generate more readable headlines, because the final title is derived by tailoring human-written sentences. However, extractive models give less informative titles (Alfonseca et al., 2013), because sentences are very sparse, making high-recall candidate extraction difficult. In contrast, abstractive models use phrases as the basic processing units, which are much less sparse. However, it is more difficult for abstractive HG to ensure the grammaticality of the generated titles, given that sentence synthesis is still very inaccurate based on a set of phrases with little grammatical information (Zhang, 2013). In this paper, we propose an event-driven model for headline generation, which alleviates the 462 disadvantages of both extractive and abstractive HG. The framework of the proposed model is shown in Figure 1. 
In particular, we use events as the basic processing units for candidate extraction. We use structured tuples to represent the subject, predicate and object of an event. This form of event representation is widely used in open information extraction (Fader et al., 2011; Qiu and Zhang, 2014). Intuitively, events can be regarded as a trade-off between sentences and phrases. Events are meaningful structures, containing necessary grammatical information, and yet are much less sparse than sentences. We use salience measures of both sentences and phrases for event extraction, and thus our model can be regarded as a combination of extractive and abstractive HG. During the headline generation step, A graphbased multi-sentence compression (MSC) model is proposed to generate a final title, given multiple events. First a directed acyclic word graph is constructed based on the extracted events, and then a beam-search algorithm is used to find the best title based on path scoring. We conduct experiments on standard datasets for headline generation. The results show that headline generation can benefit not only from exploiting events as the basic processing units, but also from the proposed graph-based MSC model. Both our candidate extraction and headline generation methods outperform competitive baseline methods, and our model achieves the best results compared with previous state-of-the-art systems. 2 Background Previous extractive and abstractive models take two main steps, namely candidate extraction and headline generation. Here, we introduce these two types of models according to the two steps. 2.1 Extractive Headline Generation Candidate Extraction. Extractive models exploit sentences as the basic processing units in this step. Sentences are ranked by their salience according to specific strategies (Dorr et al., 2003; Erkan and Radev, 2004; Zajic et al., 2005). One of the stateof-the-art approaches is the work of Erkan and Radev (2004), which exploits centroid, position and length features to compute sentence salience. We re-implemented this method as our baseline sentence ranking method. In this paper, we use SentRank to denote this method. Headline Generation. Given a set of sentences, extractive models exploit sentence compression techniques to generate a final title. Most previous work exploits single-sentence compression (SSC) techniques. Dorr et al. (2003) proposed the Hedge Trimmer algorithm to compress a sentence by making use of handcrafted linguistically-based rules. Alfonseca et al. (2013) introduce a multi-sentence compression (MSC) model into headline generation, using it as a baseline in their work. They indicated that the most important information is distributed across several sentences in the text. 2.2 Abstractive Headline Generation Candidate Extraction. Different from extractive models, abstractive models exploit phrases as the basic processing units. A set of salient phrases are selected according to specific principles during candidate extraction (Schwartz, 01; Soricut and Marcu, 2007; Xu et al., 2010; Woodsend et al., 2010). Xu et al. (2010) propose to rank phrases using background knowledge extracted from Wikipedia. Woodsend et al. (2010) use supervised models to learn the salience score of each phrase. Here, we use the work of Soricut and Marcu (2007) , namely PhraseRank, as our baseline phrase ranking method, which is an unsupervised model without external resources. The method exploits unsupervised topic discovery to find a set of salient phrases. Headline Generation. 
In the headline generation step, abstractive models exploit sentence synthesis technologies to accomplish headline generation. Zajic et al. (2005) exploit unsupervised topic discovery to find key phrases, and use the Hedge Trimmer algorithm to compress candidate sentences. One or more key phrases are added into the compressed fragment according to the length of the headline. Soricut and Marcu (2007) employ WIDL-expressions to generate headlines. Xu et al. (2010) employ keyword clustering based on several bag-of-words models to construct a headline. Woodsend et al. (2010) use quasi-synchronous grammar (QG) to optimize phrase selection and surface realization preferences jointly. 463 3 Our Model Similar to extractive and abstractive models, the proposed event-driven model consists of two steps, namely candidate extraction and headline generation. 3.1 Candidate Extraction We exploit events as the basic units for candidate extraction. Here an event is a tuple (S, P, O), where S is the subject, P is the predicate and O is the object. For example, for the sentence “Ukraine Delays Announcement of New Government”, the event is (Ukraine, Delays, Announcement). This type of event structures has been used in open information extraction (Fader et al., 2011), and has a range of NLP applications (Ding et al., 2014; Ng et al., 2014). A sentence is a well-formed structure with complete syntactic information, but can contain redundant information for text summarization, which makes sentences very sparse. Phrases can be used to avoid the sparsity problem, but with little syntactic information between phrases, fluent headline generation is difficult. Events can be regarded as a trade-off between sentences and phrases. They are meaningful structures without redundant components, less sparse than sentences and containing more syntactic information than phrases. In our system, candidate event extraction is performed on a bipartite graph, where the two types of nodes are lexical chains (Section 3.1.2) and events (Section 3.1.1), respectively. Mutual Reinforcement Principle (Zha, 2002) is applied to jointly learn chain and event salience on the bipartite graph for a given input. We obtain the top-k candidate events by their salience measures. 3.1.1 Extracting Events We apply an open-domain event extraction approach. Different from traditional event extraction, for which types and arguments are predefined, open event extraction does not have a closed set of entities and relations (Fader et al., 2011). We follow Hu’s work (Hu et al., 2013) to extract events. Given a text, we first use the Stanford dependency parser1 to obtain the Stanford typed dependency structures of the sentences (Marneffe and Manning, 2008). Then we focus on 1http://nlp.stanford.edu/software/lex-parser.shtml DT NNPS MD VB DT NNP NNP POS NNS the Keenans could demand the Aryan Nations ’ assets nsubj aux dobj det nn poss Figure 2: Dependency tree for the sentence “the Keenans could demand the Aryan Nations’ assets”. two relations, nsubj and dobj, for extracting event arguments. Event arguments that have the same predicate are merged into one event, represented by tuple (Subject, Predicate, Object). For example, given the sentence, “the Keenans could demand the Aryan Nations’ assets”, Figure 2 present its partial parsing tree. Based on the parsing results, two event arguments are obtained: nsubj(demand, Keenans) and dobj(demand, assets). The two event arguments are merged into one event: (Keenans, demand, assets). 
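For illustration, the argument collection and merging step can be sketched as follows, assuming the parse is already available as plain (head, relation, dependent) triples rather than going through the parser API (a minimal sketch, not the authors' code; the triples reproduce the relevant part of Figure 2):

def extract_events(dependencies):
    # Collect nsubj/dobj arguments and merge those sharing a predicate
    # into (Subject, Predicate, Object) tuples; a missing argument stays None.
    subjects, objects = {}, {}
    for head, rel, dep in dependencies:
        if rel == "nsubj":
            subjects[head] = dep
        elif rel == "dobj":
            objects[head] = dep
    return [(subjects.get(p), p, objects.get(p))
            for p in sorted(set(subjects) | set(objects))]

# "the Keenans could demand the Aryan Nations' assets"
deps = [("demand", "nsubj", "Keenans"), ("demand", "dobj", "assets")]
print(extract_events(deps))   # [('Keenans', 'demand', 'assets')]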
3.1.2 Extracting Lexical Chains Lexical chains are used to link semanticallyrelated words and phrases (Morris and Hirst, 1991; Barzilay and Elhadad, 1997). A lexical chain is analogous to a semantic synset. Compared with words, lexical chains are less sparse for event ranking. Given a text, we follow Boudin and Morin (2013) to construct lexical chains based on the following principles: 1. All words that are identical after stemming are treated as one word; 2. All NPs with the same head word fall into one lexical chain;2 3. A pronoun is added to the corresponding lexical chain if it refers to a word in the chain (The coreference resolution is performed using the Stanford Coreference Resolution system);3 4. Lexical chains are merged if their main words are in the same synset of WordNet.4 2NPs are extracted according to the dependency relations nn and amod. As shown in Figure 2, we can extract the noun phrase Aryan Nations according to the dependency relation nn(Nations, Aryan). 3http://nlp.stanford.edu/software/dcoref.shtml 4http://wordnet.princeton.edu/ 464 At initialization, each word in the document is a lexical chain. We repeatedly merge existing chains by the four principles above until convergence. In particular, we focus on content words only, including verbs, nouns and adjective words. After the merging, each lexical chain represents a word cluster, and the first occuring word in it can be used as the main word of chain. 3.1.3 Learning Salient Events Intuitively, one word should be more important if it occurs in more important events. Similarly, one event should be more important if it includes more important words. Inspired by this, we construct a bipartite graph between lexical chains and events, shown in Figure 3, and then exploit MRP to jointly learn the salience of lexical chains and events. MRP has been demonstrated effective for jointly learning the vertex weights of a bipartite graph (Zhang et al., 2008; Ventura et al., 2013). Given a text, we construct bipartite graph between the lexical chains and events, with an edge being constructed between a lexical chain and an event if the event contains a word in the lexical chain. Suppose that there are n events {e1, · · · , en} and m lexical chains: {l1, · · · , lm} in the bipartite graph Gbi. Their scores are represented by sal(e) = {sal(e1), · · · , sal(en)} and sal(l) = {sal(l1), · · · , sal(lm)}, respectively. We compute the final sal(e) and sal(l) iteratively by MRP. At each step, sal(ei) and sal(lj) are computed as follows: sal(ei) ∝ m X j=1 rij × sal(lj) sal(lj) ∝ n X i=1 rij × sal(ei) rij = P (lj,ei)∈Gbi w(lj) · w(ei) A (1) where rij ∈R denotes the cohesion between lexicon chain li and event ej, A is a normalization factor, sal(·) denotes the salience, and the initial values of sal(e) and sal(t) can be assigned randomly. The remaining problem is how to define the salience score of a given lexicon chain li and a given event ej. In this work, we use the guidance of abstractive and extractive models to compute Lexical Chains Events Figure 3: Bipartite graph where two vertex sets denote lexical chains and events, respectively. sal(lj) and sal(ei), respectively, as shown below: w(lj) = X w∈lj salabs(w) w(ei) = X s∈Sen(ei) salext(s) (2) where salabs(·) denotes the word salience score of an abstractive model, salext(·) denotes the sentence salience score of an extractive model, and Sen(ei) denotes the sentence set where ei is extracted from. 
We exploit our baseline sentence ranking method, SentRank, to obtain the sentence salience score, and use our baseline phrase ranking method, PhraseRank, to obtain the phrase salience score. 3.2 Headline Generation We use a graph-based multi-sentence compression (MSC) model to generate the final title for the proposed event-driven model. The model is inspired by Filippova (2010). First, a weighted directed acyclic word graph is built, with a start node and an end node in the graph. A headline can be obtained by any path from the start node to the end node. We measure each candidate path by a scoring function. Based on the measurement, we exploit a beam-search algorithm to find the optimum path. 3.2.1 Word-Graph Construction Given a set of candidate events CE, we extract all the sentences that contain the events. In particular, we add two artificial words, ⟨S⟩and ⟨E⟩, to the start position and end position of all sentences, respectively. Following Filippova (2010), we extract all words in the sentences as graph vertexes, and then construct edges based on these words. Filippova (2010) adds edges 465 ⟨S⟩ ⟨E⟩ King Norodom ... opposition groups ... Hun Sun on ... rejected party ... ... for talks ... ... ... Figure 4: Word graph generated from candidates and a possible compression path. for all the word pairs that are adjacent in one sentence. The title generated using this strategy can mistakenly contain common word bigrams( i.e. adjacent words) in different sentences. To address this, we change the strategy slightly, by adding edges for all word pairs of one sentence in the original order. In another words, if word wj occurs after wi in one sentence, then we add an edge wi →wj for the graph. Figure 4 gives an example of the word graph. The search space of the graph is larger compared with that of Filippova (2010) because of more added edges. Different from Filippova (2010), salience information is introduced into the calculation of the weights of vertexes. One word that occurs in more salient candidate should have higher weight. Given a graph G = (V, E), where V = {V1, · · · , Vn} denotes the word nodes and E = {Eij ∈{0, 1}, i, j ∈[1, n]} denotes the edges. The vertex weight is computed as follows: w(Vi) = X e∈CE sal(e) exp{−dist(Vi.w, e)} (3) where sal(e) is the salience score of an event from the candidate extraction step, Vi.w denotes the word of vertex Vi, and dist(w, e) denotes the distance from the word w to the event e, which are defined by the minimum distance from w to all the related words of e in a sentence by the dependency path5 between them. Intuitively, equation 3 demonstrates that a vertex is salient when its corresponding word is close to salient 5The distance is +∞when e and w are not in one sentence. events. It is worth noting that the formula can adapt to extractive and abstractive models as well, by replacing events with sentences and phrases. We use them for the SentRank and PhraseRank baseline systems in Section 4.3, respectively. The equation to compute the edge weight is adopted from Filippova (2010): w′(Eij) = X s rdist(Vi.w, Vj.w) w(Eij) = w(Vi)w(Vj) · w′(Eij) w(Vi) + w(Vj) (4) where w′(Eij) refers to the sum of rdist(Vi.w, Vj.w) over all sentences, and rdist(·) denotes the reciprocal distance of two words in a sentence by the dependency path. By the formula, an edge is salient when the corresponding vertex weights are large or the corresponding words are close. 3.2.2 Scoring Method The key to our MSC model is the path scoring function. 
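Before turning to the details of the scoring method, the vertex and edge weighting of Eqs. (3) and (4) just defined can be illustrated with the following minimal sketch (the distance table, reciprocal-distance sums and event saliences are placeholder inputs; dist is assumed to be +inf when a word and an event never co-occur in a sentence):

import math

def vertex_weight(word, event_salience, dist):
    """Eq. (3): w(V_i) = sum over candidate events e of
    sal(e) * exp(-dist(word, e)).  Missing (word, event) pairs default
    to +inf distance and therefore contribute nothing."""
    return sum(sal * math.exp(-dist.get((word, e), float("inf")))
               for e, sal in event_salience.items())

def edge_weight(wi, wj, event_salience, dist, rdist_sum):
    """Eq. (4): w(E_ij) = w(V_i) * w(V_j) * w'(E_ij) / (w(V_i) + w(V_j)),
    where w'(E_ij) is the reciprocal dependency-path distance of the two
    words summed over all sentences (precomputed here as rdist_sum)."""
    w_i = vertex_weight(wi, event_salience, dist)
    w_j = vertex_weight(wj, event_salience, dist)
    w_prime = rdist_sum.get((wi, wj), 0.0)
    if w_i + w_j == 0.0:
        return 0.0
    return w_i * w_j * w_prime / (w_i + w_j)

if __name__ == "__main__":
    # Illustrative numbers only.
    event = ("Keenans", "demand", "assets")
    event_salience = {event: 0.8}
    dist = {("demand", event): 0.0, ("assets", event): 1.0}
    rdist_sum = {("demand", "assets"): 1.0}
    print(edge_weight("demand", "assets", event_salience, dist, rdist_sum))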
We measure a candidate path based on two aspects. Besides the sum edge score of the path, we exploit a trigram language model to compute a fluency score of the path. Language models have been commonly used to generate more readable titles. The overall score of a path is compute by: score(p) = edge(p) + λ × flu(p) edge(p) = P Eij∈p ln{w(Eij)} n flu(p) = P i ln{p(wi|wi−2wi−1)} n (5) where p is a candidate path and the corresponding word sequence of p is w1 · · · wn. A trigram language model is trained using SRILM6 on English Gigaword (LDC2011T07). 3.2.3 Beam Search Beam search has been widely used aiming to find the sub optimum result (Collins and Roark, 2004; Zhang and Clark, 2011), when exact inference is extremely difficult. Assuming our word graph has a vertex size of n, the worst computation complexity is O(n4) when using a trigram language model, which is time consuming. 6http://www.speech.sri.com/projects/srilm/ 466 Input: G ←(V, E), LM, B Output: best candidates ←{ {⟨S⟩} } loop do beam ←{ } for each candidate in candidates if candidate endwith ⟨E⟩ ADDTOBEAM(beam, candidate) continue for each Vi in V candidate ←ADDVERTEX(candidate, Vi) COMPUTESCORE(candidate, LM) ADDTOBEAM(beam, candidate) end for end for candidates ←TOP-K(beam, B) if candidates all endwith ⟨E⟩: break end loop best ←BEST(candidates) Figure 5: The beam-search algorithm. Using beam search, assuming the beam size is B, the time complexity decreases to O(Bn2). Pseudo-code of our beam search algorithm is shown in Figure 5. During search, we use candidates to save a fixed size (B) of partial results. For each iteration, we generate a set of new candidates by adding one vertex from the graph, computing their scores, and maintaining the top B candidates for the next iteration. If one candidate reaches the end of the graph, we do not expand it, directly adding it into the new candidate set according to its current score. If all the candidates reach the end, the searching algorithm terminates and the result path is the candidate from candidates with the highest score. 4 Experiment 4.1 Settings We use the standard HG test dataset to evaluate our model, which consists of 500 articles from DUC–04 task 17, where each article is provided with four reference headlines. In particular, we use the first 100 articles from DUC–07 as our development set. There are averaged 40 events per article in the two datasets. All the pre-processing steps, including POS tagging, lemma analysis, dependency parsing and anaphora resolution, are 7http://duc.nist.gov/duc2004/tasks.html conducted using the Stanford NLP tools (Marneffe and Manning, 2008). The MRP iteration number is set to 10. We use ROUGE (Lin, 2004) to automatically measure the model performance, which has been widely used in summarization tasks (Wang et al., 2013; Ng et al., 2014). We focus on Rouge1 and Rouge2 scores, following Xu et al. (2010). In addition, we conduct human evaluations, using the same method as Woodsend et al. (2010). Four participants are asked to rate the generated headlines by three criteria: informativeness (how much important information in the article does the headline describe?), fluency (is it fluent to read?) and coherence (does it capture the topic of article?). Each headline is given a subjective score from 0 to 5, with 0 being the worst and 5 being the best. The first 50 documents from the test set and their corresponding headlines are selected for human rating. We conduct significant tests using t-test. 
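Putting the decoding pieces of Section 3.2 together, the path score of Eq. (5) and the beam search of Figure 5 can be sketched as follows (a minimal illustration, not the actual system: the word graph is a plain adjacency dictionary with precomputed edge weights, and uniform_lm is a placeholder for a real trigram language model such as one trained with SRILM):

import math

def path_score(path, edge_w, trigram_prob, lam=0.4):
    """Eq. (5): average log edge weight plus lam times the average
    trigram log-probability of the word sequence."""
    n = len(path)
    edge = sum(math.log(edge_w[(path[i], path[i + 1])])
               for i in range(n - 1)) / n
    flu = sum(math.log(trigram_prob(tuple(path[max(0, i - 2):i]), path[i]))
              for i in range(n)) / n
    return edge + lam * flu

def beam_search(graph, edge_w, trigram_prob, start="<S>", end="<E>",
                beam_size=8, lam=0.4):
    """Figure 5: keep the top-B partial paths and expand them until every
    candidate reaches the end node (the sketch assumes every path in the
    graph can reach <E>)."""
    candidates = [[start]]
    while not all(c[-1] == end for c in candidates):
        beam = []
        for cand in candidates:
            if cand[-1] == end:          # finished paths are kept as they are
                beam.append(cand)
                continue
            for nxt in graph.get(cand[-1], []):
                beam.append(cand + [nxt])
        candidates = sorted(
            beam,
            key=lambda p: path_score(p, edge_w, trigram_prob, lam),
            reverse=True)[:beam_size]
    return max(candidates,
               key=lambda p: path_score(p, edge_w, trigram_prob, lam))

def uniform_lm(history, word):
    return 0.1                           # placeholder trigram probability

if __name__ == "__main__":
    graph = {"<S>": ["Honduras"], "Honduras": ["braced"],
             "braced": ["for"], "for": ["Mitch"], "Mitch": ["<E>"]}
    edge_w = {("<S>", "Honduras"): 1.0, ("Honduras", "braced"): 1.0,
              ("braced", "for"): 1.0, ("for", "Mitch"): 1.0,
              ("Mitch", "<E>"): 1.0}
    print(" ".join(beam_search(graph, edge_w, uniform_lm)[1:-1]))

The toy example has a single path through the graph; in practice the graph contains many competing paths, and the language-model term trades fluency against edge salience. The beam size and fluency weight correspond to the parameters B and lambda tuned in the next subsection.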
4.2 Development Results There are three important parameters in the proposed event-driven model, including the beam size B, the fluency weight λ and the number of candidate events N. We find the optimum parameters on development dataset in this section. For efficiency, the three parameters are optimized separately. The best performance is achieved with B = 8, λ = 0.4 and N = 10. We report the model results on the development dataset to study the influences of the three parameters, respectively, with the other two parameters being set with their best value. 4.2.1 Influence of Beam Size We perform experiments with different beam widths. Figure 6 shows the results of the proposed model with beam sizes of 1, 2, 4, 8, 16, 32, 64. As can be seen, our model can achieve the best performances when the beam size is set to 8. Larger beam sizes do not bring better results. 4.2.2 Influence of Fluency Weight The fluency score is used for generating readable titles, while the edge score is used for generating informative titles. The balance between them is important. By default, we set one to the weight of edge score, and find the best weight λ for the fluency score. We set λ ranging from 0 to 1 with and interval of 0.1, to investigate the influence of 467 0.3 0.32 0.34 0.36 0.38 0.4 0.42 10 20 30 40 50 60 0.1 0.11 0.12 0.13 0.14 0.15 0.16 Rouge1 Rouge2 Beam Size Rouge1 Rouge2 Figure 6: Results with different beam sizes. 0.35 0.36 0.37 0.38 0.39 0.4 0.41 0.42 0 0.2 0.4 0.6 0.8 1 0.13 0.135 0.14 0.145 0.15 0.155 0.16 0.165 Rouge1 Rouge2 Fluency Weight Rouge1 Rouge2 Figure 7: Results using different fluency weights. this parameter8. Figure 7 shows the results. The best result is obtained when λ = 0.4. 4.2.3 Influence of Candidate Event Count Ideally, all the sentences of an original text should be considered in multi-sentence compression. But an excess of sentences would bring more noise. We suppose that the number of candidate events N is important as well. To study its influence, we report the model results with different N, from 1 to 15 with an interval of 1. As shown in Figure 8, the performance increases significantly from 1 to 10, and no more gains when N > 10. The performance decreases drastically when M ranges from 12 to 15. 4.3 Final Results Table 1 shows the final results on the test dataset. The performances of the proposed eventdriven model are shown by EventRank. In addition, we use our graph-based MSC model to 8Preliminary results show that λ is better below one. 9The mark ∗denotes the results are inaccurate, which are guessed from the figures in the published paper. 0.32 0.34 0.36 0.38 0.4 0.42 2 4 6 8 10 12 14 0.12 0.13 0.14 0.15 0.16 Rouge1 Rouge2 Number of Candidate Events Rouge1 Rouge2 Figure 8: Results using different numbers of candidate events. Method Model Type Rouge1 Rouge2 Our SalMSC SentRank Extractive 0.3511 0.1375 PhraseRank Abstractive 0.3706 0.1415 EventRank Event-driven 0.4247‡ 0.1484‡ Using MSC SentRank Extractive 0.2773 0.0980 PhraseRank Abstractive 0.3652 0.1299 EventRank Event-driven 0.3822‡ 0.1380‡ Other work SentRank+SSC Extractive 0.2752 0.0855 Topiary Abstractive 0.2835 0.0872 Woodsend Abstractive 0.26∗ 0.06∗9 Table 1: Performance comparison for automatic evaluation. The mark ‡ denotes that the result is significantly better with a p-value below 0.01. generate titles for SentRank and PhraseRank, respectively, as mentioned in Section 3.2.1. By comparison with the two models, we can examine the effectiveness of the event-driven model. 
As shown in Table 1, the event-driven model achieves the best scores on both Rouge1 and Rouge2, demonstrating events are more effective than sentences and phrases. Further, we compare our proposed MSC method with the MSC proposed by Filippova (2010), to study the effectiveness of our novel MSC. We use MSC10 and SalMSC11 to 10The MSC source code, published by Boudin and Morin (2013), is available at https://github.com/boudinfl/takahe. 11Our source code is available at https://github.com/ dram218/WordGraphCompression. 468 Method Info. Infu. Cohe. SentRank 4.13 2.85 2.54 PhraseRank 4.21 3.25 2.62 EventRank 4.35‡ 3.41‡ 3.22‡ Table 2: Results from the manual evaluation. The mark ‡ denotes the result is significantly better with a p-value below 0.01. SentRank, PhraseRank and EventRank to denote their MSC method and our proposed MSC, respectively, applying them, respectively. As shown in Table 1, better performance is achieved by our MSC, demonstrating the effectiveness of our proposed MSC. Similarly, the event-driven model can achieve the best results. We report results of previous state-of-the-art systems as well. SentRank+SSC denotes the result of Erkan and Radev (2004), which uses our SentRank and SSC to obtain the final title. Topiary denotes the result of Zajic et al. (2005), which is an early abstractive model. Woodsend denotes the result of Woodsend et al. (2010), which is an abstractive model using a quasisynchronous grammar to generate a title. As shown in Table 1, MSC is significantly better than SSC, and our event-driven model achieves the best performance, compared with state-of-the-art systems. Following Alfonseca et al. (2013), we conduct human evaluation also. The results are shown in Table 2, by three aspects: informativeness, fluency and coherence. The overall tendency is similar to the results, and the event-driven model achieves the best results. 4.4 Example Outputs We show several representative examples of the proposed event-driven model, in comparison with the extractive and abstractive models. The examples are shown in Table 3. In the first example, the results of both SentRank and PhraseRank contain the redundant phrase “catastrophe Tuesday”. The output of PhraseRank is less fluent compared with that of SentRank. The preposition “for” is not recovered by the headline generation system PhraseRank. In contrast, the output of EventRank is better, capturing the major event in the reference title. 
Method Generated Headlines Reference Honduras, other Caribbean countries brace for the wrath of Hurricane Mitch SentRank Honduras braced for potential catastrophe Tuesday as Hurricane Mitch roared through northwest Caribbean PhraseRank Honduras braced catastrophe Tuesday Hurricane Mitch roared northwest Caribbean EventRank Honduras braced for Hurricane Mitch roared through northwest Caribbean Reference At Ibero-American summit Castro protests arrest of Pinochet in London SentRank Castro disagreed with the arrest Augusto Pinochet calling international meddling PhraseRank Cuban President Fidel Castro disagreed arrest London Chilean dictator Augusto Pinochet EventRank Fidel Castro disagreed with arrest in London of Chilean dictator Augusto Pinochet Reference Cambodian leader Hun Sen rejects opposition demands for talks in Beijing SentRank Hun Sen accusing opposition parties of internationalize the political crisis PhraseRank opposition parties demands talks internationalize political crisis EventRank Cambodian leader Hun Sen rejected opposition parties demands for talks Table 3: Comparison of headlines generated by the different methods. In the second example, the outputs of three systems all lose the phrase “Ibero-American summit”. SentRank gives different additional information compared with PhraseRank and EventRank. Overall, the three outputs can be regarded as comparable. PhraseRank also has a fluency problem by ignoring some function words. In the third example, SentRank does not capture the information on “demands for talks”. PhraseRank discards the preposition word “for”. The output of EventRank is better, being both more fluent and more informative. From the three examples, we can see that SentRank tends to generate more readable titles, but may lose some important information. PhraseRank tends to generate a title with more important words, but the fluency is relatively weak even with MSC. EventRank combines the advantages of both SentRank and PhraseRank, generating titles that contain more important events with complete structures. The observation verifies our hypothesis in the introduction — that extractive models have the problem of low information coverage, and 469 abstractive models have the problem of poor grammaticality. The event-driven mothod can alleviate both issues since event offer a trade-off between sentence and phrase. 5 Related Work Our event-driven model is different from traditional extractive (Dorr et al., 2003; Erkan and Radev, 2004; Alfonseca et al., 2013) and abstractive models (Zajic et al., 2005; Soricut and Marcu, 2007; Woodsend et al., 2010; Xu et al., 2010) in that events are used as the basic processing units instead of sentences and phrases. As mentioned above, events are a trade-off between sentences and phrases, avoiding sparsity and structureless problems. In particular, our event-driven model can interact with sentences and phrases, thus is a light combination for two traditional models. The event-driven model is mainly inspired by Alfonseca et al. (2013), who exploit events for multi-document headline generation. They leverage titles of sub-documents for supervised training. In contrast, we generate a title for a single document using an unsupervised model. We use novel approaches for event ranking and title generation. In recent years, sentence compression (Galanis and Androutsopoulos, 2010; Yoshikawa and Iida, 2012; Wang et al., 2013; Li et al., 2014; Thadani, 2014) has received much attention. 
Some methods can be directly applied for multidocument summarization (Wang et al., 2013; Li et al., 2014). To our knowledge, few studies have been explored on applying them in headline generation. Multi-sentence compression based on word graph was first proposed by Filippova (2010). Some subsequent work was presented recently. Boudin and Morin (2013) propose that the key phrase is helpful to sentence generation. The key phrases are extracted according to syntactic pattern and introduced to identify shortest path in their work. Mehdad et al. (2013; Mehdad et al. (2014) introduce the MSC based on word graph into meeting summarization. Tzouridis et al. (2014) cast multi-sentence compression as a structured predication problem. They use a largemargin approach to adapt parameterised edge weights to the data in order to acquire the shortest path. In their work, the sentences introduced to a word graph are treated equally, and the edges in the graph are constructed according to the adjacent order in original sentence. Our MSC model is also inspired by Filippova (2010). Our approach is more aggressive than their approach, generating compressions with arbitrary length by using a different edge construction strategy. In addition, our search algorithm is also different from theirs. Our graph-based MSC model is also similar in spirit to sentence fusion, which has been used for multi-document summarization (Barzilay and McKeown, 2005; Elsner and Santhanam, 2011). 6 Conclusion and Future Work We proposed an event-driven model headline generation, introducing a graph-based MSC model to generate the final title, based on a set of events. Our event-driven model can incorporate sentence and phrase salience, which has been used in extractive and abstractive HG models. The proposed graph-based MSC model is not limited to our event-driven model. It can be applied on extractive and abstractive models as well. Experimental results on DUC–04 demonstrate that event-driven model can achieve better results than extractive and abstractive models, and the proposed graph-based MSC model can bring improved performances compared with previous MSC techniques. Our final event-driven model obtains the best result on this dataset. For future work, we plan to explore two directions. Firstly, we plan to introduce event relations to learning event salience. In addition, we plan to investigate other methods about multisentence compression and sentence fusion, such as supervised methods. Acknowledgments We thank all reviewers for their detailed comments. This work is supported by the State Key Program of National Natural Science Foundation of China (Grant No.61133012), the National Natural Science Foundation of China (Grant No.61373108, 61373056), the National Philosophy Social Science Major Bidding Project of China (Grant No.11&ZD189), and the Key Program of Natural Science Foundation of Hubei, China (Grant No.2012FFA088). The corresponding authors of this paper are Meishan Zhang and Donghong Ji. 470 References Enrique Alfonseca, Daniele Pighin and Guillermo Garrido. 2013. HEADY: News headline abstraction through event pattern clustering. In Proceedings of ACL 2013,pages 1243–1253. Regina Barzilay and Michael Elhadad. 1997. Using Lexical Chains for Text Summarization. In Proceedings of the Intelligent Scalable Text Summarization Workshop(ISTS’97), Madrid. Regina Barzilay and Kathleen R. McKeown. 2005. Sentence fusion for multidocument news summarization. Computational Linguistics, 31(3), pages 297–328. 
Florian Boudin and Emmanuel Morin. 2013. Keyphrase Extraction for N-best Reranking in Multi-Sentence Compression. In Proccedings of the NAACL HLT 2013 conference, page 298–305. James Clarke and Mirella Lapata. 2010. Discourse Constraints for Document Compression. Computational Linguistics, 36(3), pages 411– 441. Michael Collins and Brian Roark. 2004. Incremental Parsing with the Perceptron Algorithm. In Proceedings of ACL 2004, pages 111-118. Corston-Oliver, Simon. 2001. Text compaction for display on very small screens. In Proceedings of the NAACL Workshop on Automatic Summarization, Pittsburg, PA, 3 June 2001, pages 89–98. Xiao Ding, Yue Zhang, Ting Liu, Junwen Duan. 2014. Using Structured Events to Predict Stock Price Movement : An Empirical Investigation. In Proceedings of EMNLP 2014, pages 1415–1425. Bonnie Dorr, David Zajic, and Richard Schwartz. 2003. Hedge trimmer: A parse-and-trim approach to headline generation. In proceedings of the HLT–NAACL 03 on Text summarization workshop, volume 5, pages 1–8. Micha Elsner and Deepak Santhanam. 2011. Learning to fuse disparate sentences. In Proceedings of ACL 2011, pages 54–63. Nicolai Erbs, Iryna Gurevych and Torsten Zesch. 2013. Hierarchy Identification for Automatically Generating Table-of-Contents. In Proceedings of Recent Advances in Natural Language Processing, Hissar, Bulgaria, pages 252–260. Gunes Erkan and Dragomir R Radev. 2004. LexRank : Graph-based Lexical Centrality as Salience in Text Summarization. Journal of Artificial Intelligence Research 22, 2004, pages 457–479. Fader A, Soderland S, Etzioni O. 2011. Identifying relations for open information extraction. In Proceedings of EMNLP 2011, pages 1535–1545. Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In Proceedings of Coling 2010, pages 322–330. Dimitrios Galanis and Ion Androutsopoulos. 2010. An extractive supervised two-stage method for sentence compression. In Proceedings of NAACL 2010, pages 885–893. Barbara J. Grosz and Scott Weinstein and Aravind K. Joshi. 1995. Centering: A framework for modeling the local coherence of discourse. Computational Linguistics, volume 21, pages 203–225. Zhichao Hu, Elahe Rahimtoroghi, Larissa Munishkina, Reid Swanson and Marilyn A.Walker. 2013. Unsupervised Induction of Contingent Event Pairs from Film Scenes. In Proceedings of EMNLP 2013, pages 369–379. Chen Li,Yang Liu, Fei Liu, Lin Zhao, Fuliang Weng. 2014. Improving Multi-documents Summarization by Sentence Compression based on Expanded Constituent Parse Trees. In Proceedings of EMNLP 2014, pages 691–701. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branckes Out: Proceedings of the ACL–04 Workshop, pages 74–81. Andre F.T. Martins and Noah A. Smith. 2009. Summarization with a joint model for sentence extraction and compression. In Proceedings of the Workshop on Integer Linear Programming for Natural Language Processing, pages 1–9. Yashar Mehdad, Giuseppe Carenini, Frank W.Tompa and Raymond T.Ng. 2013. Abstractive Meeting Summarization with Entailment and Fusion. In Proceedings of the 14th European Workshop on Natural Language Generation, pages 136–146. Yashar Mehdad, Giuseppe Carenini and Raymond T.Ng. 2014. Abstractive Summarization of Spoken and Written Conversations Based on Phrasal Queries. In Proceedings of ACL 2014, pages 1220– 1230. Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. 
Computational Linguistics, 17(1), pages 21–48. Marie-Catherine de Marneffe and Christopher D. Manning. 2008. The stanford typed dependencies representation. In COLING 2008 Workshop on Cross-framework and Cross-domain Parser Evaluation. Jun-Ping Ng, Yan Chen, Min-Yen Kan, Zhoujun Li. 2014. Exploiting Timelines to Enhance Multidocument Summarization. Proceedings of ACL 2014, pages 923–933. 471 Likun Qiu and Yue Zhang. 2014. ZORE: A Syntaxbased System for Chinese Open Relation Extraction. Proceedings of EMNLP 2014, pages 1870–1880. Robert G. Sargent. 1988. Polynomial Time Joint Structural Inference for Sentence Compression. Management Science, 34(10), pages 1231–1251. Schwartz R. 1988. Unsupervised topic discovery. In Proceedings of workshop on language modeling and information retrieval, pages 72–77. R. Soricut, and D. Marcu. 2007. Abstractive headline generation using WIDL-expressions. Information Processing and Management, 43(6), pages 1536– 1548. Kapil Thadani. 2014. Approximation Strategies for Multi-Structure Sentence Compression. Proceedings of ACL 2014, pages 1241–1251. Emmanouil Tzouridis, Jamal Abdul Nasir and Ulf Brefeld. 2014. Learning to Summarise Related Sentences. Proceedings of COLING 2014,Dublin, Ireland, August 23-29 2014. pages 1636–1647. Carles Ventura, Xavier Giro-i-Nieto, Veronica Vilaplana, Daniel Giribet, and Eusebio Carasusan. 2013. Automatic keyframe selection based on Mutual Reinforcement Algorithm. In Proceedings of 11th international workshop on content-based multimedia indexing(CBMI), pages 29–34. Stephen Wan and Kathleen McKeown. 2004. Generating overview summaries of ongoing email thread discussions. In Proceedings of COLING 2004, Geneva, Switzerland, 2004, pages 1384–1394. Lu Wang, Hema Raghavan, Vittorio Castelli, Radu Florian, Claire Cardie. 2013. A sentence compression based framework to query-focused mutli-document summarization. In Proceedings of ACL 2013, Sofia, Bulgaria, August 4-9 2013, pages 1384–1394. Kristian Woodsend, Yansong Feng and Mirella Lapata. 2010. Title generation with quasi-synchronous grammar. In Proceedings of EMNLP 2010, pages 513–523. Songhua Xu, Shaohui Yang and Francis C.M. Lau. 2010. Keyword extraction and headline generation using novel work features. In Proceedings of AAAI 2010, pages 1461–1466. Katsumasa Yoshikawa and Ryu Iida. 2012. Sentence Compression with Semantic Role Constraints. In Proceedings of ACL 2012, pages 349–353. David Zajic, Bonnie Dorr and Richard Schwartz. 2005. Headline generation for written and broadcast news. lamp-tr-120, cs-tr-4698. Hongyuan Zha. 2002. Generic summarization and keyphrase extraction using mutual reinforement principle and sentence clustering. In Proceedings of SIGIR 2002, pages 113–120. Qi Zhang, Xipeng Qiu, Xuanjing Huang, Wu Lide. 2008. Learning semantic lexicons using graph mutual reinforcement based bootstrapping. Acta Automatica Sinica, 34(10), pages 1257–1261. Yue Zhang, Stephen Clark. 2011. Syntactic Processing Using the Generalized Perceptron and Beam Search. Computational Linguistics, 37(1), pages 105–150. Yue Zhang. 2013. Partial-Tree Linearization: Generalized Word Ordering for Text Synthesis. In Proceedings of IJCAI 2013, pages 2232–2238. 472
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 473–482, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics New Transfer Learning Techniques for Disparate Label Sets Young-Bum Kim† Karl Stratos‡ Ruhi Sarikaya† Minwoo Jeong† †Microsoft Corporation, Redmond, WA ‡Columbia University, New York, NY {ybkim, ruhi.sarikaya, minwoo.jeong}@microsoft.com [email protected] Abstract In natural language understanding (NLU), a user utterance can be labeled differently depending on the domain or application (e.g., weather vs. calendar). Standard domain adaptation techniques are not directly applicable to take advantage of the existing annotations because they assume that the label set is invariant. We propose a solution based on label embeddings induced from canonical correlation analysis (CCA) that reduces the problem to a standard domain adaptation task and allows use of a number of transfer learning techniques. We also introduce a new transfer learning technique based on pretraining of hidden-unit CRFs (HUCRFs). We perform extensive experiments on slot tagging on eight personal digital assistant domains and demonstrate that the proposed methods are superior to strong baselines. 1 Introduction The main goal of NLU is to automatically extract the meaning of spoken or typed queries. In recent years, this task has become increasingly important as more and more speech-based applications have emerged. Recent releases of personal digital assistants such as Siri, Google Now, Dragon Go and Cortana in smart phones provide natural language based interface for a variety of domains (e.g. places, weather, communications, reminders). The NLU in these domains are based on statistical machine learned models which require annotated training data. Typically each domain has its own schema to annotate the words and queries. However the meaning of words and utterances could be different in each domain. For example, “sunny” is considered a weather condition in the weather domain but it may be a song title in a music domain. Thus every time a new application is developed or a new domain is built, a significant amount of resources is invested in creating annotations specific to that application or domain. One might attempt to apply existing techniques (Blitzer et al., 2006; Daum´e III, 2007) in domain adaption to this problem, but a straightforward application is not possible because these techniques assume that the label set is invariant. In this work, we provide a simple and effective solution to this problem by abstracting the label types using the canonical correlation analysis (CCA) by Hotelling (Hotelling, 1936) a powerful and flexible statistical technique for dimensionality reduction. We derive a low dimensional representation for each label type that is maximally correlated to the average context of that label via CCA. These shared label representations, or label embeddings, allow us to map label types across different domains and reduce the setting to a standard domain adaptation problem. After the mapping, we can apply the standard transfer learning techniques to solve the problem. Additionally, we introduce a novel pretraining technique for hidden-unit CRFs (HUCRFs) to effectively transfer knowledge from one domain to another. 
In our experiments, we find that our pretraining method is almost always superior to strong baselines such as the popular domain adaptation method of Daum´e III (2007). 2 Problem description and related work Let D be the number of distinct domains. Let Xi be the space of observed samples for the i-th domain. Let Yi be the space of possible labels for the i-th domain. In most previous works in domain adaptation (Blitzer et al., 2006; Daum´e III, 2007), observed data samples may vary but label space is 473 invariant1. That is, Yi = Yj ∀i, j ∈{1 . . . D} but Xi ̸= Xj for some domains i and j. For example, in part-of-speech (POS) tagging on newswire and biomedical domains, the observed data sample may be radically different but the POS tag set remains the same. In practice, there are cases, where the same query is labeled differently depending on the domain or application and the context. For example, Fred Myer can be tagged differently; “send a text message to Fred Myer” and “get me driving direction to Fred Myer ”. In the first case, Fred Myer is person in user’s contact list but it is a grocery store in the second one. So, we relax the constraint that label spaces must be the same. Instead, we assume that surface forms (i.e words) are similar. This is a natural setting in developing multiple applications on speech utterances; input spaces (service request utterances) do not change drastically but output spaces (slot tags) might. Multi-task learning differs from our task. In general multi-task learning aims to improve performance across all domains while our domain adaptation objective is to optimize the performance of semantic slot tagger on the target domain. Below, we review related work in domain adaption and natural language understanding (NLU). 2.1 Related Work Domain adaptation has been widely used in many natural language processing (NLP) applications including part-of-speech tagging (Schnabel and Sch¨utze, 2014), parsing (McClosky et al., 2010), and machine translation (Foster et al., 2010). Most of the work can be classified either supervised domain adaptation (Chelba and Acero, 2006; Blitzer et al., 2006; Daume III and Marcu, 2006; Daum´e III, 2007; Finkel and Manning, 2009; Chen et al., 2011) or semi-supervised adaptation (Ando and Zhang, 2005; Jiang and Zhai, 2007; Kumar et al., 2010; Huang and Yates, 2010). Our problem setting falls into the former. Multi-task learning has become popular in NLP. Sutton and McCallum (2005) showed that joint 1Multilingual learning (Kim et al., 2011; Kim and Snyder, 2012; Kim and Snyder, 2013) has same setting. learning and/or decoding of sub-tasks helps to improve performance. Collobert and Weston (2008) proved the similar claim in a deep learning architecture. While our problem resembles their settings, there are two clear distinctions. First, we aim to optimize performance on the target domain by minimizing the gap between source and target domain while multi-task learning jointly learns the shared tasks. Second, in our problem the domains are different, but they are closely related. On the other hand, prior work focuses on multiple subtasks of the same data. Despite the increasing interest in NLU (De Mori et al., 2008; Xu and Sarikaya, 2013; Sarikaya et al., 2014; Xu and Sarikaya, 2014; Anastasakos et al., 2014; El-Kahky et al., 2014; Liu and Sarikaya, 2014; Marin et al., 2014; Celikyilmaz et al., 2015; Ma et al., 2015; Kim et al., 2015), transfer learning in the context of NLU has not been much explored. 
The most relevant previous work is Tur (2006) and Li et al. (2011), which described both the effectiveness of multi-task learning in the context of NLU. For multi-task learning, they used shared slots by associating each slot type with aggregate active feature weight vector based on an existing domain specific slot tagger. Our empirical results shows that these vector representation might be helpful to find shared slots across domain, but cannot find bijective mapping between domains. Also, Jeong and Lee (2009) presented a transfer learning approach in multi-domain NLU, where the model jointly learns slot taggers in multiple domains and simultaneously predicts domain detection and slot tagging results.2 To share parameters across domains, they added an additional node for domain prediction on top of the slot sequence. However, this framework also limited to a setting in which the label set remains invariant. In contrast, our method is restricted to this setting without any modification of models. 3 Sequence Modeling Technique The proposed techniques in Section 4 and 5 are generic methodologies and not tied to any particular models such as any sequence models and instanced based models. However, because of superior performance over CRF, we use a hidden unit CRF (HUCRF) of Maaten et al. (2011). 2Jeong and Lee (2009) pointed out that if the domain is given, their method is the same as that of Daum´e III (2007). 474 Figure 1: Graphical representation of hidden unit CRFs. While popular and effective, a CRF is still a linear model. In contrast, a HUCRF benefits from nonlinearity, leading to superior performance over CRF (Maaten et al., 2011). Thus we will focus on HUCRFs to demonstrate our techniques in experiments. 3.1 Hidden Unit CRF (HUCRF) A HUCRF introduces a layer of binary-valued hidden units z = z1 . . . zn ∈{0, 1} for each pair of label sequence y = y1 . . . yn and observation sequence x = x1 . . . xn. A HUCRF parametrized by θ ∈Rd and γ ∈Rd′ defines a joint probability of y and z conditioned on x as follows: pθ,γ(y, z|x) = exp(θ⊤Φ(x, z) + γ⊤Ψ(z, y)) P z′∈{0,1}n y′∈Y(x,z′) exp(θ⊤Φ(x, z′) + γ⊤Ψ(z′, y′)) (1) where Y(x, z) is the set of all possible label sequences for x and z, and Φ(x, z) ∈Rd and Ψ(z, y) ∈Rd′ are global feature functions that decompose into local feature functions: Φ(x, z) = n X j=1 φ(x, j, zj) Ψ(z, y) = n X j=1 ψ(zj, yj−1, yj) HUCRF forces the interaction between the observations and the labels at each position j to go through a latent variable zj: see Figure 1 for illustration. Then the probability of labels y is given by marginalizing over the hidden units, pθ,γ(y|x) = X z∈{0,1}n pθ,γ(y, z|x) As in restricted Boltzmann machines (Larochelle and Bengio, 2008), hidden units are conditionally independent given observations and labels. This allows for efficient inference with HUCRFs despite their richness (see Maaten et al. (2011) for details). We use a perceptron-style algorithm of Maaten et al. (2011) for training HUCRFs. 4 Transfer learning between domains with different label sets In this section, we describe three methods for utilizing annotations in domains with different label types. First two methods are about transferring features and last method is about transferring model parameters. Each of these methods requires some sort of mapping for label types. A fine-grained label type needs to be mapped to a coarse one; a label type in one domain needs to be mapped to the corresponding label type in another domain. 
We will provide a solution to obtaining these label mappings automatically in Section 5. 4.1 Coarse-to-fine prediction This approach has some similarities to the method of Li et al. (2011) in that shared slots are used to transfer information between domains. In this two-stage approach, we train a model on the source domain, make predictions on the target domain, and then use the predicted labels as additional features to train a final model on the target domain. This can be helpful if there is some correlation between the label types in the source domain and the label types in the target domain. However, it is not desirable to directly use the label types in the source domain since they can be highly specific to that particular domain. An effective way to combat this problem is to reduce the original label types such start-time, contract-info, and restaurant as to a set of coarse label types such as name, date, time, and location that are universally shared across all domains. By doing so, we can use the first model to predict generic labels such as time and then use the second model to use this information to predict fine-grained labels such as start-time and end-time. 4.2 Method of Daum´e III (2007) In this popular technique for domain adaptation, we train a model on the union of the source domain data and the target domain data 475 but with the following preprocessing step: each feature is duplicated and the copy is conjoined with a domain indicator. For example, in a WEATHER domain dataset, a feature that indicates the identity of the string “Sunny” will generate both w(0) = Sunny and (w(0) = Sunny) ∧(domain = WEATHER) as feature types. This preprocessing allows the model to utilize all data through the common features and at the same time specialize to specific domains through the domain specific features. This is especially helpful when there is label ambiguity on particular features (e.g., “Sunny” might be a weather-condition in a WEATHER domain dataset but a music-song-name in a MUSIC domain dataset). Note that a straightforward application of this technique is in general not feasible in our situation. This is because we have features conjoined with label types and our domains do not share label types. This breaks the sharing of features across domains: many feature types in the source domain are disjoint from those in the target domain due to different labeling. Thus it is necessary to first map source domain label types to target domain label type. After the mapping, features are shared across domains and we can apply this technique. 4.3 Transferring model parameter In this approach, we train HUCRF on the source domain and transfer the learned parameters to initialize the training process on the target domain. This can be helpful for at least two reasons: 1. The resulting model will have parameters for feature types observed in the source domain as well as the target domain. Thus it has better feature coverage. 2. If the training objective is non-convex, this initialization can be helpful in avoiding bad local optima. Since the training objective of HUCRFs is nonconvex, both benefits can apply. We show in our experiments that this is indeed the case: the model benefits from both better feature coverage and better initialization. Note that in order to use this approach, we need to map source domain label types to target domain label type so that we know which parameter in Figure 2: Illustration of a pretraining scheme for HUCRFs. 
the source domain corresponds to which parameter in the target domain. This can be a many-toone, one-to-many, one-to-one mapping depending on the label sets. 4.3.1 Pretraining with HUCRFs In fact, pretraining HUCRFs in the source domain can be done in various ways. Recall that there are two parameter types: θ ∈Rd for scoring observations and hidden states and γ ∈Rd′ for scoring hidden states and labels (Eq. (1)). In pretraining, we first train a model (θ1, γ1) on the source data {(x(i) src, y(i) src)}nsrc i=1 : (θ1, γ1) ≈arg max θ,γ nsrc X i=1 log pθ,γ(y(i) src|x(i) src) Then we train a model (θ2, γ2) on the target data {(x(i) trg, y(i) trg)}ntrg i=1 by initializing (θ2, γ2) ← (θ1, γ1): (θ2, γ2) ≈arg max θ,γ ntrg X i=1 log pθ,γ(y(i) trg|x(i) trg) Here, we can choose to initialize only θ2 ←θ1 and discard the parameters for hidden states and labels since they may not be the same. The θ1 parameters model the hidden structures in the source domain data and serve as a good initialization point for learning the θ2 parameters in the target domain. This can be helpful if the mapping between the label types in the source data and the label types in the target data is unreliable. This process is illustrated in Figure 2. 5 Automatic generation of label mappings All methods described in Section 4 require a way to propagate the information in label types across different domains. A straightforward solution would be to manually construct 476 such mappings by inspection. For instance, we can specify that start-time and end-time are grouped as the same label time, or that the label public-transportation-route in the PLACES domain maps to the label implicit-location in the CALENDAR domain. Instead, we propose a technique that automatically generates the label mappings. We induce vector representations for all label types through canonical correlation analysis (CCA) — a powerful and flexible technique for deriving lowdimensional representation. We give a review of CCA in Section 5.1 and describe how we use the technique to construct label mappings in Section 5.2. 5.1 Canonical Correlation Analysis (CCA) CCA is a general technique that operates on a pair of multi-dimensional variables. CCA finds k dimensions (k is a parameter to be specified) in which these variables are maximally correlated. Let x1 . . . xn ∈Rd and y1 . . . yn ∈Rd′ be n samples of the two variables. For simplicity, assume that these variables have zero mean. Then CCA computes the following for i = 1 . . . k: arg max ui∈Rd, vi∈Rd′: u⊤ i ui′=0 ∀i′<i v⊤ i vi′=0 ∀i′<i Pn l=1(u⊤ i xl)(v⊤ i yl) qPn l=1(u⊤ i xl)2 qPn l=1(v⊤ i yl)2 In other words, each (ui, vi) is a pair of projection vectors such that the correlation between the projected variables u⊤ i xl and v⊤ i yl (now scalars) is maximized, under the constraint that this projection is uncorrelated with the previous i −1 projections. This is a non-convex problem due to the interaction between ui and vi. Fortunately, a method based on singular value decomposition (SVD) provides an efficient and exact solution to this problem (Hotelling, 1936). The resulting solution u1 . . . uk ∈Rd and v1 . . . vk ∈Rd′ can be used to project the variables from the original d- and d′-dimensional spaces to a k-dimensional space: x ∈Rd −→¯x ∈Rk : ¯xi = u⊤ i x y ∈Rd′ −→¯y ∈Rk : ¯yi = v⊤ i y The new k-dimensional representation of each variable now contains information about the other variable. 
The value of k is usually selected to be much smaller than d or d′, so the representation is typically also low-dimensional. 5.2 Inducing label embeddings We now describe how to use CCA to induce vector representations for label types. Using the same notation, let n be the number of instances of labels in the entire data. Let x1 . . . xn be the original representations of the label samples and y1 . . . yn be the original representations of the associated words set contained in the labels. We employ the following definition for the original representations for reasons we explain below. Let d be the number of distinct label types and d′ be the number of distinct word types. • xl ∈Rd is a zero vector in which the entry corresponding to the label type of the l-th instance is set to 1. • yl ∈Rd′ is a zero vector in which the entries corresponding to words spanned by the label are set to 1. The motivation for this definition is that similar label types often have similar or same word. For instance, consider two label types start-time, (start time of a calendar event) and end-time, meaning (the end time of a calendar event). Each type is frequently associated with phrases about time. The phrases {“9 pm”, “7”, “8 am”} might be labeled as start-time; the phrases {“9 am”, “7 pm”} might be labeled as end-time. In these examples, both label types share words “am”, “pm”, “9”, and “7” even though phrases may not match exactly. Figure 3 gives the CCA algorithm for inducing label embeddings. It produces a k-dimensional vector for each label type corresponding to the CCA projection of the one-hot encoding of that label. 5.3 Discussion on alternative label representations We point out that there are other options for inducing label representations besides CCA. For instance, one could simply use the sparse feature vector representation of each label. However, CCA’s low-dimensional projection is computationally more convenient and arguably more generalizable. One can also consider training a predictive model similar to word2vec (Mikolov 477 Figure 4: Bijective mapping: labels in REMINDER domain (orange box) are mapped into those in PLACES and ALARM domains. CCA-LABEL Input: labeled sequences {(x(i), y(i))}n i=1, dimension k Output: label vector v ∈Rk for each label type 1. For each label type l ∈{1 . . . d} and word type w ∈ {1 . . . d} present in the sequences, calculate • count(l) = number of times label l occurs • count(w) = number of times word w occurs • count(l, w) = number of times word w occurs under label l 2. Define a matrix Ω∈Rd×d′ where: Ωl,w = count(l, w) p count(l)count(w) 3. Perform rank-k SVD on Ω. Let U ∈Rd×k be a matrix where the i-th column is the left singular vector of Ω corresponding to the i-th largest singular value. 4. For each label l, set the l-th normalized row of U to be its vector representation. Figure 3: CCA algorithm for inducing label embeddings. et al., 2013). But this requires significant efforts in implementation and also very long training time. In contrast, CCA is simple, efficient, and effective and can be readily implemented. Also, CCA is theoretically well understood while methods inspired by neural networks are not. 5.4 Constructing label mappings Vector representations of label types allow for natural solutions to the task of constructing label mappings. 5.4.1 Mapping to a coarse label set Given a domain and the label types that occur in the domain, we can reduce the number of label types by simply clustering their vector representations. 
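A minimal sketch of this procedure, combining the CCA-LABEL recipe of Figure 3 with the k-means clustering step (the label/word pairs are illustrative, numpy and scikit-learn are assumed to be available, and this is not the implementation used in the experiments):

from collections import Counter
import numpy as np
from sklearn.cluster import KMeans

def label_embeddings(pairs, k):
    """Figure 3: build Omega[l, w] = count(l, w) / sqrt(count(l) * count(w)),
    take a rank-k SVD, and use the normalized rows of U as label vectors.

    pairs: iterable of (label, word) occurrences.
    Returns (label list, embedding matrix of shape [#labels, k])."""
    pairs = list(pairs)
    label_count = Counter(l for l, _ in pairs)
    word_count = Counter(w for _, w in pairs)
    labels, words = sorted(label_count), sorted(word_count)
    lab_idx = {l: i for i, l in enumerate(labels)}
    w_idx = {w: j for j, w in enumerate(words)}
    omega = np.zeros((len(labels), len(words)))
    for (l, w), c in Counter(pairs).items():
        omega[lab_idx[l], w_idx[w]] = c / np.sqrt(label_count[l] * word_count[w])
    U, _, _ = np.linalg.svd(omega, full_matrices=False)
    U = U[:, :k]
    return labels, U / (np.linalg.norm(U, axis=1, keepdims=True) + 1e-12)

def coarse_label_map(labels, emb, n_clusters):
    """Section 5.4.1: cluster the label embeddings with k-means to map
    fine-grained labels to coarse cluster ids."""
    cluster_ids = KMeans(n_clusters=n_clusters, n_init=10,
                         random_state=0).fit_predict(emb)
    return dict(zip(labels, cluster_ids))

if __name__ == "__main__":
    pairs = [("start_time", "pm"), ("start_time", "9"),
             ("end_time", "pm"), ("end_time", "7"),
             ("contact_name", "fred"), ("contact_name", "myer")]
    labels, emb = label_embeddings(pairs, k=2)
    print(coarse_label_map(labels, emb, n_clusters=2))

The same embeddings also support the bijective mapping of Section 5.4.2, obtained by taking, for each label in one domain, its nearest neighbor among the label vectors of the other domain.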
For instance, if the embeddings for start-time and end-time are close together, they will be grouped as a single label type. We run the k-means algorithm on the label embeddings to obtain this coarse label set. Table 1 shows examples of this clustering. It demonstrates that the CCA representations obtained by the procedure described in Section 5.2 are indeed informative of the labels’ properties. Cluster Labels Cluster Labels Time start time Person contact info end time artist original start time from contact name travel time relationship name Loc absolute loc Loc ATTR prefer route leaving loc public trans route from loc nearby position ref distance Table 1: Some of cluster examples 5.4.2 Bijective mapping between label sets Given a pair of domains and their label sets, we can create a bijective label mapping by finding the nearest neighbor of each label type. Figure 4 shows some actual examples of CCA-based bijective maps, where the label set in the REMINDER domain is mapped to the PLACES and ALARM domains. One particularly interesting example is that move earlier time in REMINDER domain is mapped to Travel time in PLACES and Duration in ALARM domain. This is a tag used in a user utterance requesting to move an 478 Domains # of label Source Training Test Description Alarm 7 27865 3334 Set alarms Calendar 20 50255 7017 Set appointments & meetings in the calendar Communication 18 104881 14484 Make calls, send texts, and communication related user request Note 4 17445 2342 Note taking Ondevice 7 60847 9704 Phone settings Places 32 150348 20798 Find places & get direction Reminder 16 62664 8235 Setting time, person & place based reminder Weather 9 53096 9114 Weather forecasts & historical information about weather patterns Table 2: Size of number of label, labeled data set size and description for Alarm, Calendar, Communication, Note, Ondevice, Places, Reminder and Weather domains partitioned into training and test set. appointment to an earlier time. For example, in the query “move the dentist’s appointment up by 30 minutes.”, the phrase “30 minutes” is tagged with move earlier time. The role of this tag is very similar to the role of Travel time in PLACES (not Time) and Duration in ALARMS (not Start date), and CCA is able to recover this relation. 6 Experiments In this section, we turn to experimental findings to provide empirical support for our proposed methods. 6.1 Setup To test the effectiveness of our approach, we apply it to a suite of eight Cortana personal assistant domains for slot sequence tagging tasks, where the goal is to find the correct semantic tagging of the words in a given user utterance. The data statistics and short descriptions are shown in Table 2. As the table indicates, the domains have very different granularity and diverse semantics. 6.2 Baselines In all our experiments, we trained HUCRF and only used n-gram features, including unigram, bigram, and trigram within a window of five words (±2 words) around the current word as binary feature functions. With these features, we compare the following methods for slot tagging: • NoAdapt: train only on target training data. • Union: train on the union of source and target training data. • Daume: train with the feature duplication method described in 4.2. • C2F: train with the coarse-to-fine prediction method described in 4.1. • Pretrain: train with the pretraining method described in 4.3.1. 
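For concreteness, the feature duplication used by the Daume baseline (Section 4.2) amounts to the following preprocessing step (a minimal sketch with illustrative feature names):

def augment_features(features, domain):
    """Daume III (2007): keep every original (shared) feature and add a
    copy conjoined with the domain indicator, so the learner can decide
    per feature whether to share it across domains or specialize it."""
    augmented = list(features)
    augmented += ["{}&&domain={}".format(f, domain) for f in features]
    return augmented

if __name__ == "__main__":
    feats = ["w(0)=Sunny", "w(-1)=is"]
    print(augment_features(feats, "WEATHER"))
    # ['w(0)=Sunny', 'w(-1)=is',
    #  'w(0)=Sunny&&domain=WEATHER', 'w(-1)=is&&domain=WEATHER']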
To apply these methods except for Target, we treat each of the eight domains in turn as the test domain, with one of remaining seven domain as the source domain. As in general domain adaptation setting, we assume that the source domain has a sufficient amount of labeled data but the target domain has an insufficient amount of labeled data. Specifically, For each test or target domain, we only use 10% of the training examples to simulate data scarcity. In the following experiments, we report the slot F-measure, using the standard CoNLL evaluation script 3 6.3 Results on mappings Mapping technique Adaptation technique Manual Li et al. (2011) CCA Union 68.16 64.7 70.51 Daume 73.42 67.32 75.85 C2F 75.47 75.69 76.29 Pretrain 77.72 76.99 78.76 NoAdapt 75.13 Table 3: Comparison of slot F1 scores using the proposed CCA-derived mapping versus other mapping methods combined with different adaptation techniques. To assess the quality of our automatic mapping methods via CCA described in Section 5, we compared against manually established mappings and also the mapping method of Li et al. (2011). The method of Li et al. (2011) is to associate each slot type with the aggregate active feature weight vectors based on an existing domain specific slot tagger (a CRF). Manual mapping were performed 3http://www.cnts.ua.ac.be/conll2000/chunking/output.html 479 Target Source Minimum distance domain performance Domain Nearest Domain NoAdapt Union Daume C2F Pretrain Alarm Calendar 74.82 84.46 84.97 81.54 84.88 Calendar Reminder 70.51 73.94 73.07 72.82 77.08 Note Reminder 65.38 56.39 69.89 66.6 69.55 Ondevice Weather 70.86 66.66 71.17 71.49 73.5 Reminder Calendar 77.3 83.38 82.19 81.29 83.22 Communication Reminder 79.31 74.28 80.33 79.66 82.96 Places Weather 73.93 73.74 75.86 73.73 80.11 Weather Places 92.78 92.88 94.43 93.75 97.18 Average 75.61 75.72 78.99 77.61 81.06 Table 4: Slot F1 scores on each target domain using adapted models from the nearest source domain. 
hhhhhhhhhhhh Source Target Alarm Calendar Note Ondevice Reminder Communication Places Weather Average NoAdapt 74.82 70.51 65.38 70.86 77.3 79.31 73.93 92.78 75.61 Alarm Union 72.26 59.92 67.32 79.45 77.91 73.78 92.67 74.76 Daume 72.77 66.28 70.94 81.12 80.38 75.62 93.12 77.18 C2F 70.59 64.06 71 78.8 79.5 74.29 92.75 75.86 Pretrain 76.68 68.12 71.8 81.25 81.5 77.1 95.03 78.78 Calendar Union 84.46 50.64 64.7 83.38 75.02 71.13 93.2 74.65 Daume 84.97 65.43 70.12 82.19 79.78 75.21 93.1 78.69 C2F 81.54 66.08 71.22 81.29 80.11 73.75 93.18 78.17 Pretrain 84.88 69.21 72.3 83.22 82.75 77.89 95.8 80.86 Note Union 60.26 60.42 65.79 69.81 76.85 70.56 90.02 70.53 Daume 66.03 67.38 69.54 76.65 77.83 73.49 92.09 74.72 C2F 74.68 70.51 71.34 77.49 79.48 74.17 92.89 77.22 Pretrain 75.52 72.4 71.4 80.1 82.06 76.53 94.22 78.89 Ondevice Union 63.72 66.28 55.67 75.16 74.85 70.59 90.7 71.00 Daume 71.01 69.39 64.02 75.75 77.92 74.41 92.62 75.02 C2F 74.02 70.33 64.99 77.43 79.53 73.84 92.71 76.12 Pretrain 76.27 71.59 67.21 78.67 82.34 77.45 95.04 78.37 Reminder Union 84.74 73.94 56.39 61.27 74.28 68.14 92.22 73.00 Daume 84.66 73.07 69.89 67.94 80.33 73.36 93.19 77.49 C2F 80.42 72.82 66.6 71.36 79.66 74.35 92.38 76.80 Pretrain 84.75 77.08 69.55 71.9 82.96 78.57 95.37 80.03 Communication Union 58.25 54.69 65.28 62.95 63.98 68.16 87.13 65.78 Daume 70.4 67.41 69.14 69.26 77.67 73.33 92.82 74.29 C2F 74.54 70.84 65.48 70.81 77.68 74.15 92.79 75.18 Pretrain 76.04 74.01 68.76 73.2 80.74 76.83 94.58 77.74 Places Union 71.7 67.56 45.37 53.93 67.78 63.67 92.88 66.13 Daume 75.69 69.01 66.11 65.46 79.01 78.42 94.43 75.45 C2F 78.9 71.64 66.93 71.26 79.2 79.19 93.75 77.27 Pretrain 76.8 74.12 67.5 72.7 81 81.89 97.18 78.74 Weather Union 69.43 58.53 56.76 66.66 74.98 77.53 73.74 68.23 Daume 75 71.73 66.54 71.17 79.36 80.57 75.86 74.32 C2F 77.61 71.47 63.24 71.49 78.44 79.43 73.73 73.63 Pretrain 77.37 74.5 68.23 73.5 80.96 82.05 80.11 76.67 Average Union 70.37 64.81 55.72 63.23 73.51 74.3 70.87 91.26 70.51 Daume 75.4 70.23 66.77 69.2 78.32 79.32 74.47 93.05 75.85 C2F 77.39 71.17 65.4 71.21 78.62 79.56 74.04 92.92 76.29 Pretrain 78.80 74.34 68.37 72.40 80.85 82.22 77.78 95.32 78.76 Table 5: Slot F1 scores of using Union, Daume, Coarse-to-Fine and pretraining on all pairs of source and target data. The numbers in boldface are the best performing adaptation technique in each pair. by two experienced annotators who have PhD in linguistics and machine learning. Each annotator first assigned mapping slot labels independently and then both annotators collaborated to reduce disagreement of their mapping results. Initially, the disagreement of their mapping rate between two annotators was about 30% because labels of slot tagging are very diverse; furthermore, in some cases it is not clear for human annotators if there exists a valid mapping. The results are shown at Table 3. Vector representation of Li et al. (2011) increases the F1 score slightly from 75.13 to 75.69 in C2F, but it does not help as much in cases that require bijective mapping: Daume, Union and Pretrain. In contrast, the proposed CCA based technique consistently outperforms the NoAdapt baselines by significant margins. More importantly, it also outperforms manual results under all conditions. It is perhaps not so surprising – the CCA derived mapping is completely data driven, while human annotators have nothing but the prior linguistic 480 knowledge about the slot tags and the domain. 
6.4 Main Results The full results are shown in Table 5, where all pairs of source and target languages are considered for domain adaptation. It is clear from the table that we can always achieve better results using adaptation techniques than the non-adapted models trained only on the target data. Also, our proposed pretraining method outperforms other types of adaptation in most cases. The overall result of our experiments are shown in Table 4. In this experiment, we compare different adaptation techniques using our suggested CCA-based mapping. Here, except for NoAdapt, we use both the target and the nearest source domain data. To find the nearest domain, we first map fine grained label set to coarse label set by using the method described in Section 5.4.1 and then count how many coarse labels are used in a domain. And then we can find the nearest source domain by calculating the l2 distance between the multinomial distributions of the source domain and the target domain over the set of coarse labels. For example, for CALENDAR, we identify REMINDER as the nearest domain and vice versa because most of their labels are attributes related to time. In all experiments, the domain adapted models perform better than using only target domain data which achieves 75.1% F1 score. Simply combining source and target domain using our automatically mapped slot labels performs slightly better than baseline. C2F boosts the performance up to 77.61% and Daume is able to reach 78.99%.4 Finally, our proposed method, pretrain achieves nearly 81.02% F1 score. 7 Conclusion We presented an approach to take advantage of existing annotations when the data are similar but the label sets are different. This approach was based on label embeddings from CCA, which reduces the setting to a standard domain adaptation problem. Combined with a novel pretraining scheme applied to hidden-unit CRFs, our approach is shown to be superior to strong baselines in extensive experiments for slot tagging on eight distinct personal assistant domains. 4It is known that Daume is less beneficial when the source and target domains are similar due to the increased number of features. References Tasos Anastasakos, Young-Bum Kim, and Anoop Deoras. 2014. Task specific continuous word representations for mono and multi-lingual spoken language understanding. In Proceeding of the ICASSP, pages 3246–3250. IEEE. Rie Kubota Ando and Tong Zhang. 2005. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817–1853. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of the EMNLP, pages 120–128. Association for Computational Linguistics. Asli Celikyilmaz, Dilek Hakkani-Tur, Panupong Pasupat, and Ruhi Sarikaya. 2015. Enriching word embeddings using knowledge graph for semantic tagging in conversational dialog systems. AAAI - Association for the Advancement of Artificial Intelligence. Ciprian Chelba and Alex Acero. 2006. Adaptation of maximum entropy capitalizer: Little data can help a lot. Computer Speech & Language, 20(4):382–399. Minmin Chen, Kilian Q Weinberger, and John Blitzer. 2011. Co-training for domain adaptation. In Advances in neural information processing systems, pages 2456–2464. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the ICML, pages 160–167. ACM. 
Hal Daume III and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, pages 101–126. Hal Daum´e III. 2007. Frustratingly easy domain adaptation. proceedings of the ACL, page 256. Renato De Mori, Fr´ed´eric Bechet, Dilek Hakkani-Tur, Michael McTear, Giuseppe Riccardi, and Gokhan Tur. 2008. Spoken language understanding. Signal Processing Magazine, IEEE, 25(3):50–58. Ali El-Kahky, Derek Liu, Ruhi Sarikaya, Gokhan Tur, Dilek Hakkani-Tur, and Larry Heck. 2014. Extending domain coverage of language understanding systems via intent transfer between domains using knowledge graphs and search query click logs. IEEE, Proceedings of the ICASSP. Jenny Rose Finkel and Christopher D Manning. 2009. Hierarchical bayesian domain adaptation. In Proceedings of the ACL, pages 602–610. Association for Computational Linguistics. 481 George Foster, Cyril Goutte, and Roland Kuhn. 2010. Discriminative instance weighting for domain adaptation in statistical machine translation. In Proceedings of the EMNLP, pages 451–459. Association for Computational Linguistics. Harold Hotelling. 1936. Relations between two sets of variates. Biometrika, 28(3/4):321–377. Fei Huang and Alexander Yates. 2010. Exploring representation-learning approaches to domain adaptation. In Proceedings of the 2010 Workshop on Domain Adaptation for Natural Language Processing, pages 23–30. Association for Computational Linguistics. Minwoo Jeong and Gary Geunbae Lee. 2009. Multidomain spoken language understanding with transfer learning. Speech Communication, 51(5):412– 424. Jing Jiang and ChengXiang Zhai. 2007. Instance weighting for domain adaptation in nlp. In Proceedings of the ACL, volume 7, pages 264–271. Association for Computational Linguistics. Young-Bum Kim and Benjamin Snyder. 2012. Universal grapheme-to-phoneme prediction over latin alphabets. In Proceedings of the EMNLP, pages 332– 343, Jeju Island, South Korea, July. Association for Computational Linguistics. Young-Bum Kim and Benjamin Snyder. 2013. Unsupervised consonant-vowel prediction over hundreds of languages. In Proceedings of the ACL, pages 1527–1536. Association for Computational Linguistics. Young-Bum Kim, Jo˜ao V Grac¸a, and Benjamin Snyder. 2011. Universal morphological analysis using structured nearest neighbor prediction. In Proceedings of the EMNLP, pages 322–332. Association for Computational Linguistics. Young-Bum Kim, Minwoo Jeong, Karl Stratos, and Ruhi Sarikaya. 2015. Weakly supervised slot tagging with partially labeled sequences from web search click logs. In Proceedings of the NAACL. Association for Computational Linguistics. Abhishek Kumar, Avishek Saha, and Hal Daume. 2010. Co-regularization based semi-supervised domain adaptation. In Advances in Neural Information Processing Systems, pages 478–486. Hugo Larochelle and Yoshua Bengio. 2008. Classification using discriminative restricted boltzmann machines. In Proceedings of the ICML. Xiao Li, Ye-Yi Wang, and G¨okhan T¨ur. 2011. Multitask learning for spoken language understanding with shared slots. In Proceeding of the INTERSPEECH, pages 701–704. IEEE. Xiaohu Liu and Ruhi Sarikaya. 2014. A discriminative model based entity dictionary weighting approach for spoken language understanding. IEEE Institute of Electrical and Electronics Engineers. Yi Ma, Paul A. Crook, Ruhi Sarikaya, and Eric FoslerLussier. 2015. Knowledge graph inference for spoken dialog systems. In Proceedings of the ICASSP. IEEE. Laurens Maaten, Max Welling, and Lawrence K Saul. 2011. 
Hidden-unit conditional random fields. In International Conference on Artificial Intelligence and Statistics. Alex Marin, Roman Holenstein, Ruhi Sarikaya, and Mari Ostendorf. 2014. Learning phrase patterns for text classification using a knowledge graph and unlabeled data. ISCA - International Speech Communication Association. David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In Proceedings of the NAACL, pages 28–36. Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Ruhi Sarikaya, Asli C, Anoop Deoras, and Minwoo Jeong. 2014. Shrinkage based features for slot tagging with conditional random fields. Proceeding of ISCA - International Speech Communication Association, September. Tobias Schnabel and Hinrich Sch¨utze. 2014. Flors: Fast and simple domain adaptation for part-ofspeech tagging. Transactions of the Association for Computational Linguistics, 2:15–26. Charles Sutton and Andrew McCallum. 2005. Composition of conditional random fields for transfer learning. In Proceedings of the EMNLP, pages 748–754. Association for Computational Linguistics. Gokhan Tur. 2006. Multitask learning for spoken language understanding. In Proceedings of the ICASSP, Toulouse, France. IEEE. Puyang Xu and Ruhi Sarikaya. 2013. Convolutional neural network based triangular crf for joint intent detection and slot filling. In Automatic Speech Recognition and Understanding (ASRU), pages 78– 83. IEEE. Puyang Xu and Ruhi Sarikaya. 2014. Targeted feature dropout for robust slot filling in natural language understanding. ISCA - International Speech Communication Association. 482
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 483–494, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Matrix Factorization with Knowledge Graph Propagation for Unsupervised Spoken Language Understanding Yun-Nung Chen, William Yang Wang, Anatole Gershman, and Alexander I. Rudnicky School of Computer Science, Carnegie Mellon University 5000 Forbes Aveue, Pittsburgh, PA 15213-3891, USA {yvchen, yww, anatoleg, air}@cs.cmu.edu Abstract Spoken dialogue systems (SDS) typically require a predefined semantic ontology to train a spoken language understanding (SLU) module. In addition to the annotation cost, a key challenge for designing such an ontology is to define a coherent slot set while considering their complex relations. This paper introduces a novel matrix factorization (MF) approach to learn latent feature vectors for utterances and semantic elements without the need of corpus annotations. Specifically, our model learns the semantic slots for a domain-specific SDS in an unsupervised fashion, and carries out semantic parsing using latent MF techniques. To further consider the global semantic structure, such as inter-word and inter-slot relations, we augment the latent MF-based model with a knowledge graph propagation model based on a slot-based semantic graph and a word-based lexical graph. Our experiments show that the proposed MF approaches produce better SLU models that are able to predict semantic slots and word patterns taking into account their relations and domain-specificity in a joint manner. 1 Introduction A key component of a spoken dialogue system (SDS) is the spoken language understanding (SLU) module—it parses the users’ utterances into semantic representations; for example, the utterance “find a cheap restaurant” can be parsed into (price=cheap, target=restaurant) (Pieraccini et al., 1992). To design the SLU module of a SDS, most previous studies relied on predefined slots1 for training the decoder (Seneff, 1992; Dowding 1A slot is defined as a basic semantic unit in SLU, such as “price” and “target” in the example. et al., 1993; Gupta et al., 2006; Bohus and Rudnicky, 2009). However, these predefined semantic slots may bias the subsequent data collection process, and the cost of manually labeling utterances for updating the ontology is expensive (Wang et al., 2012). In recent years, this problem led to the development of unsupervised SLU techniques (Heck and Hakkani-T¨ur, 2012; Heck et al., 2013; Chen et al., 2013b; Chen et al., 2014b). In particular, Chen et al. (2013b) proposed a frame-semantics based framework for automatically inducing semantic slots given raw audios. However, these approaches generally do not explicitly learn the latent factor representations to model the measurement errors (Skrondal and Rabe-Hesketh, 2004), nor do they jointly consider the complex lexical, syntactic, and semantic relations among words, slots, and utterances. Another challenge of SLU is the inference of the hidden semantics. 
Considering the user utterance “can i have a cheap restaurant”, from its surface patterns, we can see that it includes explicit semantic information about “price (cheap)” and “target (restaurant)”; however, it also includes hidden semantic information, such as “food” and “seeking”, since the SDS needs to infer that the user wants to “find” some cheap “food”, even though they are not directly observed in the surface patterns. Nonetheless, these implicit semantics are important semantic concepts for domainspecific SDSs. Traditional SLU models use discriminative classifiers (Henderson et al., 2012) to predict whether the predefined slots occur in the utterances or not, ignoring the unobserved concepts and the hidden semantic information. In this paper, we take a rather radical approach: we propose a novel matrix factorization (MF) model for learning latent features for SLU, taking account of additional information such as the word relations, the induced slots, and the slot relations. To further consider the global coherence of induced slots, we combine the MF model with 483 a knowledge graph propagation based model, fusing both a word-based lexical knowledge graph and a slot-based semantic graph. In fact, as it is shown in the Netflix challenge, MF is credited as the most useful technique for recommendation systems (Koren et al., 2009). Also, the MF model considers the unobserved patterns and estimates their probabilities instead of viewing them as negative examples. However, to the best of our knowledge, the MF technique is not yet well understood in the SLU and SDS communities, and it is not very straight-forward to use MF methods to learn latent feature representations for semantic parsing in SLU. To evaluate the performance of our model, we compare it to standard discriminative SLU baselines, and show that our MF-based model is able to produce strong results in semantic decoding, and the knowledge graph propagation model further improves the performance. Our contributions are three-fold: • We are among the first to study matrix factorization techniques for unsupervised SLU, taking account of additional information; • We augment the MF model with a knowledge graph propagation model, increasing the global coherence of semantic decoding using induced slots; • Our experimental results show that the MFbased unsupervised SLU outperforms strong discriminative baselines, obtaining promising results. In the next section, we outline the related work in unsupervised SLU and latent variable modeling for spoken language processing. Section 3 introduces our framework. The detailed MF approach is explained in Section 4. We then introduce the global knowledge graphs for MF in Section 5. Section 6 shows the experimental results, and Section 7 concludes. 2 Related Work Unsupervised SLU Tur et al. (2011; 2012) were among the first to consider unsupervised approaches for SLU, where they exploited query logs for slot-filling. In a subsequent study, Heck and Hakkani-T¨ur (2012) studied the Semantic Web for an unsupervised intent detection problem in SLU, showing that results obtained from the unsupervised training process align well with the performance of traditional supervised learning. Following their success of unsupervised SLU, recent studies have also obtained interesting results on the tasks of relation detection (Hakkani-T¨ur et al., 2013; Chen et al., 2014a), entity extraction (Wang et al., 2014), and extending domain coverage (ElKahky et al., 2014; Chen and Rudnicky, 2014). 
However, most of the studies above do not explicitly learn latent factor representations from the data—while we hypothesize that the better robustness in noisy data can be achieved by explicitly modeling the measurement errors (usually produced by automatic speech recognizers (ASR)) using latent variable models and taking additional local and global semantic constraints into account. Latent Variable Modeling in SLU Early studies on latent variable modeling in speech included the classic hidden Markov model for statistical speech recognition (Jelinek, 1997). Recently, Celikyilmaz et al. (2011) were the first to study the intent detection problem using query logs and a discrete Bayesian latent variable model. In the field of dialogue modeling, the partially observable Markov decision process (POMDP) (Young et al., 2013) model is a popular technique for dialogue management, reducing the cost of handcrafted dialogue managers while producing robustness against speech recognition errors. More recently, Tur et al. (2013) used a semi-supervised LDA model to show improvement on the slot filling task. Also, Zhai and Williams (2014) proposed an unsupervised model for connecting words with latent states in HMMs using topic models, obtaining interesting qualitative and quantitative results. However, for unsupervised learning for SLU, it is not obvious how to incorporate additional information in the HMMs. To the best of our knowledge, this paper is the first to consider MF techniques for learning latent feature representations in unsupervised SLU, taking various local and global lexical, syntactic, and semantic information into account. 3 The Proposed Framework This paper introduces a matrix factorization technique for unsupervised SLU,. The proposed framework is shown in Figure 1(a). Given the utterances, the task of the SLU model is to decode their surface patterns into semantic forms and differentiate the target semantic concepts from the generic semantic space for task-oriented SDSs simultaneously. Note that our model does not require any human-defined slots and domainspecific semantic representations for utterances. In the proposed model, we first build a feature matrix to represent the training utterances, where each row represents an utterance, and each column refers to an observed surface pattern or a induced slot candidate. Figure 1(b) illustrates an example 484 1 Utterance 1 i would like a cheap restaurant Word Observation Slot Candidate Train … … … cheap restaurant food expensiveness 1 locale_by_use 1 1 find a restaurant with chinese food Utterance 2 1 1 food 1 1 1 Test 1 1 .97 .90 .95 .85 .93 .92 .98 .05 .05 Word Relation Model Slot Relation Model Reasoning with Matrix Factorization Slot Induction SLU Model Semantic Representation “can I have a cheap restaurant” Slot Induction Unlabeled Collection SLU Model Training by Matrix Factorization FrameSemantic Parsing Fw Fs Feature Model Rw Rs Knowledge Graph Propagation Model Word Relation Model Slot Relation Model Knowledge Graph Construction . (a) (b) Semantic KG Lexical KG Figure 1: (a): The proposed framework. (b): Our matrix factorization method completes a partiallymissing matrix for implicit semantic parsing. Dark circles are observed facts, shaded circles are inferred facts. The slot induction maps (yellow arrow) observed surface patterns to semantic slot candidates. The word relation model (blue arrow) constructs correlations between surface patterns. 
The slot relation model (pink arrow) learns the slot-level correlations based on propagating the automatically derived semantic knowledge graphs. Reasoning with matrix factorization (gray arrow) incorporates these models jointly, and produces a coherent, domain-specific SLU model. of the matrix. Given a testing utterance, we convert it into a vector based on the observed surface patterns, and then fill in the missing values of the slots. In the first utterance in the figure, although the semantic slot food is not observed, the utterance implies the meaning facet food. The MF approach is able to learn the latent feature vectors for utterances and semantic elements, inferring implicit semantic concepts to improve the decoding process—namely, by filling the matrix with probabilities (lower part of the matrix). The feature model is built on the observed word patterns and slot candidates, where the slot candidates are obtained from the slot induction component through frame-semantic parsing (the yellow block in Figure 1(a)) (Chen et al., 2013b). Section 4.1 explains the detail of the feature model. In order to consider the additional inter-word and inter-slot relations, we propose a knowledge graph propagation model based on two knowledge graphs, which includes a word relation model (blue block) and a slot relation model (pink block), described in Section 4.2. The method of automatic knowledge graph construction is introduced in Section 5, where we leverage distributed word embeddings associated with typed syntactic dependencies to model the relations (Mikolov et al., 2013b; Mikolov et al., 2013c; Levy and Goldberg, 2014; Chen et al., 2015). Finally, we train the SLU model by learning latent feature vectors for utterances and slot candidates through MF techniques. Combining with a knowledge graph propagation model based on word/slot relations, the trained SLU model estimates the probability that each semantic slot occurs in the testing utterance, and how likely each slot is domain-specific simultaneously. In other words, the SLU model is able to transform the testing utterances into domain-specific semantic representations without human involvement. 4 The Matrix Factorization Approach Considering the benefits brought by MF techniques, including 1) modeling the noisy data, 2) modeling hidden semantics, and 3) modeling the 485 can i have a cheap restaurant Frame: capability FT LU: can FE Filler: i Frame: expensiveness FT LU: cheap Frame: locale by use FT/FE LU: restaurant Figure 2: An example of probabilistic framesemantic parsing on ASR output. FT: frame target. FE: frame element. LU: lexical unit. long-range dependencies between observations, in this work we apply an MF approach to SLU modeling for SDSs. In our model, we use U to denote the set of input utterances, W as the set of word patterns, and S as the set of semantic slots that we would like to predict. The pair of an utterance u ∈U and a word pattern/semantic slot x ∈{W + S}, ⟨u, x⟩, is a fact. The input to our model is a set of observed facts O, and the observed facts for a given utterance is denoted by {⟨u, x⟩∈O}. The goal of our model is to estimate, for a given utterance u and a given word pattern/semantic slot x, the probability, p(Mu,x = 1), where Mu,x is a binary random variable that is true if and only if x is the word pattern/domain-specific semantic slot in the utterance u. 
We introduce a series of exponential family models that estimate the probability using a natural parameter θu,x and the logistic sigmoid function: p(Mu,x = 1 | θu,x) = σ(θu,x) = 1 1 + exp (−θu,x) (1) We construct a matrix M|U|×(|W|+|S|) as observed facts for MF by integrating a feature model and a knowledge graph propagation model below. 4.1 Feature Model First, we build a word pattern matrix Fw with binary values based on observations, where each row represents an utterance and each column refers to an observed unigram. In other words, Fw carries the basic word vectors for the utterances, which is illustrated as the left part of the matrix in Figure 1(b). To induce the semantic elements, we parse all ASR-decoded utterances in our corpus using SEMAFOR2, a state-of-the-art semantic parser for frame-semantic parsing (Das et al., 2010; Das et al., 2013), and extract all frames from semantic parsing results as slot candidates (Chen et al., 2013b; Dinarelli et al., 2009). Figure 2 shows an example of an ASR-decoded output parsed by SEMAFOR. Three FrameNet-defined frames 2http://www.ark.cs.cmu.edu/SEMAFOR/ (capability, expensiveness, and locale by use) are generated for the utterance, which we consider as slot candidates for a domain-specific dialogue system (Baker et al., 1998). Then we build a slot matrix Fs with binary values based on the induced slots, which also denotes the slot features for the utterances (right part of the matrix in Figure 1(b)). To build the feature model MF , we concatenate two matrices: MF = [ Fw Fs ], (2) which is the upper part of the matrix in Figure 1(b) for training utterances. Note that we do not use any annotations, so all slot candidates are included. 4.2 Knowledge Graph Propagation Model Since SEMAFOR was trained on FrameNet annotation, which has a more generic frame-semantic context, not all the frames from the parsing results can be used as the actual slots in the domainspecific dialogue systems. For instance, in Figure 2, we see that the frames “expensiveness” and “locale by use” are essentially the key slots for the purpose of understanding in the restaurant query domain, whereas the “capability” frame does not convey particularly valuable information for SLU. Assuming that domain-specific concepts are usually related to each other, considering global relations between semantic slots induces a more coherent slot set. It is shown that the relations on knowledge graphs help make decisions on domain-specific slots (Chen et al., 2015). Considering two directed graphs, semantic and lexical knowledge graphs, each node in the semantic knowledge graph is a slot candidate si generated by the frame-semantic parser, and each node in the lexical knowledge graph is a word wj. • Slot-based semantic knowledge graph is built as Gs = ⟨Vs, Ess⟩, where Vs = {si ∈ S} and Ess = {eij | si, sj ∈Vs}. • Word-based lexical knowledge graph is built as Gw = ⟨Vw, Eww⟩, where Vw = {wi ∈W} and Eww = {eij | wi, wj ∈Vw}. The edges connect two nodes in the graphs if there is a typed dependency between them. Figure 3 is a simplified example of a slot-based semantic knowledge graph. The structured graph helps define a coherent slot set. To model the relations between words/slots based on the knowledge graphs, we define two relation models below. 486 locale_by_use food expensiveness seeking relational_quantity PREP_FOR PREP_FOR NN AMOD AMOD AMOD Figure 3: A simplified example of the automatically derived knowledge graph. 
• Semantic Relation For modeling word semantic relations, we compute a matrix RS w = [Sim(wi, wj)]|W|×|W|, where Sim(wi, wj) is the cosine similarity between the dependency embeddings of the word patterns wi and wj after normalization. For slot semantic relations, we compute RS s = [Sim(si, sj)]|S|×|S| similarly3. The matrices RS w and RS s model not only the semantic but functional similarity since we use dependency-based embeddings (Levy and Goldberg, 2014). • Dependency Relation Assuming that important semantic slots are usually mutually related to each other, that is, connected by syntactic dependencies, our automatically derived knowledge graphs are able to help model the dependency relations. For word dependency relations, we compute a matrix RD w = [ˆr(wi, wj)]|W|×|W|, where ˆr(wi, wj) measures the dependency between two word patterns wi and wj based on the word-based lexical knowledge graph, and the detail is described in Section 5. For slot dependency relations, we similarly compute RD s = [ˆr(si, sj)]|S|×|S| based on the slotbased semantic knowledge graph. With the built word relation models (RS w and RD w ) and slot relation models (RS s and RD s ), we combine them as a knowledge graph propagation matrix MR4. MR = h RSD w 0 0 RSD s i , (3) 3For each column in RS w and RS s , we only keep top 10 highest values, which correspond the top 10 semantically similar nodes. 4The values in the diagonal of MR are 0 to model the propagation from other entries. where RSD w = RS w +RD w and RSD s = RS s +RD s to integrate semantic and dependency relations. The goal of this matrix is to propagate scores between nodes according to different types of relations in the knowledge graphs (Chen and Metze, 2012). 4.3 Integrated Model With a feature model MF and a knowledge graph propagation model MR, we integrate them into a single matrix. M = MF · (αI + βMR) (4) = h αFw + βFwRw 0 0 αFs + βFsRs i , where M is the final matrix and I is the identity matrix. α and β are the weights for balancing original values and propagated values, where α + β = 1. The matrix M is similar to MF , but some weights are enhanced through the knowledge graph propagation model, MR. The word relations are built by FwRw, which is the matrix with internal weight propagation on the lexical knowledge graph (the blue arrow in Figure 1(b)). Similarly, FsRs models the slot correlations, and can be treated as the matrix with internal weight propagation on the semantic knowledge graph (the pink arrow in Figure 1(b)). The propagation models can be treated as running a random walk algorithm on the graphs. Fs contains all slot candidates generated by SEMAFOR, which may include some generic slots (such as capability), so the original feature model cannot differentiate the domain-specific and generic concepts. By integrating with Rs, the semantic and dependency relations can be propagated via the knowledge graph, and the domainspecific concepts may have higher weights based on the assumption that the slots for dialogue systems are often mutually related (Chen et al., 2015). Hence, the structure information can be automatically involved in the matrix. Also, the word relation model brings the same function, but now on the word level. In conclusion, for each utterance, the integrated model not only predicts the probability that semantic slots occur but also considers whether the slots are domain-specific. The following sections describe the learning process. 
4.4 Parameter Estimation The proposed model is parameterized through weights and latent component vectors, where the parameters are estimated by maximizing the log 487 likelihood of observed data (Collins et al., 2001). θ∗ = arg max θ Y u∈U p(θ | Mu) (5) = arg max θ Y u∈U p(Mu | θ)p(θ) = arg max θ X u∈U ln p(Mu | θ) −λθ, where Mu is the vector corresponding to the utterance u from Mu,x in (1), because we assume that each utterance is independent of others. To avoid treating unobserved facts as designed negative facts, we consider our positive-only data as implicit feedback. Bayesian Personalized Ranking (BPR) is an optimization criterion that learns from implicit feedback for MF, which uses a variant of the ranking: giving observed true facts higher scores than unobserved (true or false) facts (Rendle et al., 2009). Riedel et al. (2013) also showed that BPR learns the implicit relations for improving the relation extraction task. 4.4.1 Objective Function To estimate the parameters in (5), we create a dataset of ranked pairs from M in (4): for each utterance u and each observed fact f+ = ⟨u, x+⟩, where Mu,x ≥δ, we choose each word pattern/slot x−such that f− = ⟨u, x−⟩, where Mu,x < δ, which refers to the word pattern/slot we have not observed to be in utterance u. That is, we construct the observed data O from M. Then for each pair of facts f+ and f−, we want to model p(f+) > p(f−) and hence θf+ > θf−according to (1). BPR maximizes the summation of each ranked pair, where the objective is X u∈U ln p(Mu | θ) = X f+∈O X f−̸∈O ln σ(θf+ −θf−). (6) The BPR objective is an approximation to the per utterance AUC (area under the ROC curve), which directly correlates to what we want to achieve – well-ranked semantic slots per utterance. 4.4.2 Optimization To maximize the objective in (6), we employ a stochastic gradient descent (SGD) algorithm (Rendle et al., 2009). For each randomly sampled observed fact ⟨u, x+⟩, we sample an unobserved fact ⟨u, x−⟩, which results in |O| fact pairs ⟨f−, f+⟩. For each pair, we perform an SGD update using the gradient of the corresponding objective function for matrix factorization (Gantner et al., 2011). can i have a cheap restaurant ccomp amod dobj nsubj det capability expensiveness locale_by_use Figure 4: The dependency parsing result. 5 Knowledge Graph Construction This section introduces the procedure of constructing knowledge graphs in order to estimate ˆr(wi, wj) for building RD w and ˆr(si, sj) for RD s in Section 4.2. Considering the relations in the knowledge graphs, the edge weights for Eww and Ess are measured as ˆr(wi, wj) and ˆr(si, sj) based on the dependency parsing results respectively. The example utterance “can i have a cheap restaurant” and its dependency parsing result are illustrated in Figure 4. The arrows denote the dependency relations from headwords to their dependents, and words on arcs denote types of the dependencies. All typed dependencies between two words are encoded in triples and form a word-based dependency set Tw = {⟨wi, t, wj⟩}, where t is the typed dependency between the headword wi and the dependent wj. For example, Figure 4 generates ⟨restaurant, AMOD, cheap⟩, ⟨restaurant, DOBJ, have⟩, etc. for Tw, Similarly, we build a slot-based dependency set Ts = {⟨si, t, sj⟩} by transforming dependencies between slot-fillers into ones between slots. 
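A minimal sketch of collecting the two dependency sets is given below; the per-utterance fields `deps` and `slot_of` are hypothetical names for the dependency-parser output and the SEMAFOR slot-filler assignments, chosen only for illustration. The worked example in the next paragraph shows the same word-to-slot transformation on ⟨restaurant, AMOD, cheap⟩.

```python
from collections import Counter

def build_dependency_sets(parsed_utterances):
    """Collect typed-dependency triples at the word level (Tw) and, where both
    the head and the dependent are slot-fillers, at the slot level (Ts).

    Each item in `parsed_utterances` is assumed to provide:
      - "deps": triples (head_word, dep_type, dependent_word) from the parser
      - "slot_of": dict mapping a slot-filler word to its induced slot
    Both field names are illustrative, not the paper's data format.
    """
    Tw, Ts = Counter(), Counter()
    for utt in parsed_utterances:
        for head, dep_type, dependent in utt["deps"]:
            Tw[(head, dep_type, dependent)] += 1
            slots = utt["slot_of"]
            if head in slots and dependent in slots:
                # e.g. <restaurant, AMOD, cheap> -> <locale_by_use, AMOD, expensiveness>
                Ts[(slots[head], dep_type, slots[dependent])] += 1
    return Tw, Ts
```

The counts kept here correspond to the occurrence frequencies C(·) used by the embedding-based scoring function in Section 5.1.2.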
For example, ⟨restaurant, AMOD, cheap⟩ from Tw is transformed into ⟨locale by use, AMOD, expensiveness⟩ for building Ts, because both sides of the non-dotted line are parsed as slot-fillers by SEMAFOR. 5.1 Relation Weight Estimation For the edges in the knowledge graphs, we model the relations between two connected nodes xi and xj as ˆr(xi, xj), where x is either a slot s or a word pattern w. Since the weights are measured based on the relations between nodes regardless of the directions, we combine the scores of two directional dependencies: ˆr(xi, xj) = r(xi →xj) + r(xj →xi), (7) where r(xi →xj) is the score estimating the dependency including xi as a head and xj as a dependent. We propose a scoring function for r(·) using dependency-based embeddings. 488 Table 1: The example contexts extracted for training dependency-based word/slot embeddings. Typed Dependency Relation Target Word Contexts Word ⟨restaurant, AMOD, cheap⟩ restaurant cheap/AMOD cheap restaurant/AMOD−1 Slot ⟨locale by use, AMOD, expensiveness⟩ locale by use expensiveness/AMOD expansiveness locale by use/AMOD−1 5.1.1 Dependency-Based Embeddings Most neural embeddings use linear bag-of-words contexts, where a window size is defined to produce contexts of the target words (Mikolov et al., 2013c; Mikolov et al., 2013b; Mikolov et al., 2013a). However, some important contexts may be missing due to smaller windows, while larger windows capture broad topical content. A dependency-based embedding approach was proposed to derive contexts based on the syntactic relations the word participates in for training embeddings, where the embeddings are less topical but offer more functional similarity compared to original embeddings (Levy and Goldberg, 2014). Table 1 shows the extracted dependency-based contexts for each target word from the example in Figure 4, where headwords and their dependents can form the contexts by following the arc on a word in the dependency tree, and −1 denotes the directionality of the dependency. After replacing original bag-of-words contexts with dependencybased contexts, we can train dependency-based embeddings for all target words (Yih et al., 2014; Bordes et al., 2011; Bordes et al., 2013). For training dependency-based word embeddings, each target x is associated with a vector vx ∈Rd and each context c is represented as a context vector vc ∈Rd, where d is the embedding dimensionality. We learn vector representations for both targets and contexts such that the dot product vx · vc associated with “good” targetcontext pairs belonging to the training data D is maximized, leading to the objective function: arg max vx,vc X (w,c)∈D log 1 1 + exp(−vc · vx), (8) which can be trained using stochastic-gradient updates (Levy and Goldberg, 2014). Then we can obtain the dependency-based slot and word embeddings using Ts and Tw respectively. 5.1.2 Embedding-Based Scoring Function With trained dependency-based embeddings, we estimate the probability that xi is the headword and xj is its dependent via the typed dependency t as P(xi −→ t xj) = Sim(xi, xj/t) + Sim(xj, xi/t−1) 2 , (9) where Sim(xi, xj/t) is the cosine similarity between word/slot embeddings vxi and context embeddings vxj/t after normalizing to [0, 1]. Based on the dependency set Tx, we use t∗ xi→xj to denote the most possible typed dependency with xi as a head and xj as a dependent. t∗ xi→xj = arg max t C(xi −→ t xj), (10) where C(xi −→ t xj) counts how many times the dependency ⟨xi, t, xj⟩occurs in the dependency set Tx. 
Then the scoring function r(·) in (7) that estimates the dependency xi →xj is measured as r(xi →xj) = C(xi −−−−→ t∗xi→xj xj)·P(xi −−−−→ t∗xi→xj xj), (11) which is equal to the highest observed frequency of the dependency xi →xj among all types from Tx and additionally weighted by the estimated probability. The estimated probability smoothes the observed frequency to avoid overfitting due to the smaller dataset. Figure 3 is a simplified example of an automatically derived semantic knowledge graph with the most possible typed dependencies as edges based on the estimated weights. Then the relation weights ˆr(xi, xj) can be obtained by (7) in order to build RD w and RD s matrices. 6 Experiments 6.1 Experimental Setup In this experiment, we used the Cambridge University SLU corpus, previously used on several other SLU tasks (Henderson et al., 2012; Chen et al., 2013a). The domain of the corpus is about restaurant recommendation in Cambridge; subjects were asked to interact with multiple SDSs in an in-car setting. The corpus contains a total number of 2,166 dialogues, including 15,453 utterances (10,571 for self-training and 4,882 for 489 Table 2: The MAP of predicted slots (%); † indicates that the result is significantly better than the Logistic Regression (row (b)) with p < 0.05 in t-test. Approach ASR Manual w/o w/ Explicit w/o w/ Explicit Explicit SVM (a) 32.48 36.62 MLR (b) 33.96 38.78 Implicit Baseline Random (c) 3.43 22.45 2.63 25.09 Majority (d) 15.37 32.88 16.43 38.41 MF Feature (e) 24.24 37.61† 22.55 45.34† Feature + KGP (f) 40.46† 43.51† 52.14† 53.40† speak on topic addr area food phone part orientational direction locale part inner outer food origin contacting postcode price range task type sending commerce scenario expensiveness range seeking desiring locating locale by use building Figure 5: The mappings from induced slots (within blocks) to reference slots (right sides of arrows). testing). The data is gender-balanced, with slightly more native than non-native speakers. The vocabulary size is 1868. An ASR system was used to transcribe the speech; the word error rate was reported as 37%. There are 10 slots created by domain experts: addr, area, food, name, phone, postcode, price range, signature, task, and type. For parameter setting, the weights for balancing feature models and propagation models, α and β, are set as 0.5 to give the same influence, and the threshold for defining the unobserved facts δ is set as 0.5 for all experiments. We use the Stanford Parser5 to obtain the collapsed typed syntactic dependencies (Socher et al., 2013) and set the dimensionality of embeddings d = 300 in all experiments. 6.2 Evaluation Metrics To evaluate the accuracy of the automatically decoded slots, we measure their quality as the proximity between predicted slots and reference slots. Figure 5 shows the mappings that indicate semantically related induced slots and reference slots (Chen et al., 2013b). To eliminate the influence of threshold selection when predicting semantic slots, in the following 5http://nlp.stanford.edu/software/lex-parser. shtml metrics, we take the whole ranking list into account and evaluate the performance by the metrics that are independent of the selected threshold. For each utterance, with the predicted probabilities of all slot candidates, we can compute an average precision (AP) to evaluate the performance of SLU by treating the slots with mappings as positive. 
AP scores the ranking result higher if the correct slots are ranked higher, which also approximates to the area under the precision-recall curve (Boyd et al., 2012). Mean average precision (MAP) is the metric for evaluating all utterances. For all experiments, we perform a paired t-test on the AP scores of the results to test the significance. 6.3 Evaluation Results Table 2 shows the MAP performance of predicted slots for all experiments on ASR and manual transcripts. For the first baseline using explicit semantics, we use the observed data to self-train models for predicting the probability of each semantic slot by support vector machine (SVM) with a linear kernel and multinomial logistic regression (MLR) (row (a)-(b)) (Pedregosa et al., 2011; Henderson et al., 2012). It is shown that SVM and MLR perform similarly, and MLR is slightly better than SVM because it has better capability of estimating probabilities. For modeling implicit semantics, two baselines are performed as references, Random (row (c)) and Majority (row (d)), where the former assigns random probabilities for all slots, and the later assigns probabilities for the slots based on their frequency distribution. To improve probability estimation, we further integrate the results from implicit semantics with the better result from explicit approaches, MLR (row (b)), by averaging the probability distribution from two results. Two baselines, Random and Majority, cannot model the implicit semantics, producing poor results. The results of Random integrated with MLR significantly degrades the performance of 490 Table 3: The MAP of predicted slots using different types of relation models in MR (%); † indicates that the result is significantly better than the feature model (column (a)) with p < 0.05 in t-test. Model Feature Knowledge Graph Propagation Model Rel. (a) None (b) Semantic (c) Dependency (d) Word (e) Slot (f) All MR h RS w 0 0 RS s i h RD w 0 0 RD s i h RSD w 0 0 0 i h 0 0 0 RSD s i h RSD w 0 0 RSD s i ASR 37.61 41.39† 41.63† 39.19† 42.10† 43.51† Manual 45.34 51.55† 49.04† 45.18 49.91† 53.40† MLR for both ASR and manual transcripts. Also, the results of Majority integrated with MLR does not produce any difference compared to the MLR baseline. Among the proposed MF approaches, only using feature model for building the matrix (row (e)) achieves 24.2% and 22.6% of MAP for ASR and manual results respectively, which are worse than two baselines using explicit semantics. However, with the combination of explicit semantics, using only the feature model significantly outperforms the baselines, where the performance comes from about 34.0% to 37.6% and from 38.8% to 45.3% for ASR and manual results respectively. Additionally integrating a knowledge graph propagation (KGP) model (row (e)) outperforms the baselines for both ASR and manual transcripts, and the performance is further improved by combining with explicit semantics (achieving MAP of 43.5% and 53.4%). The experiments show that the proposed MF models successfully learn the implicit semantics and consider the relations and domain-specificity simultaneously. 6.4 Discussion and Analysis With promising results obtained by the proposed models, we analyze the detailed difference between different relation models in Table 3. 6.4.1 Effectiveness of Semantic and Dependency Relation Models To evaluate the effectiveness of semantic and dependency relations, we consider each of them individually in MR of (3) (columns (b) and (c) in Table 3). 
Comparing to the original model (column (a)), both modeling semantic relations and modeling dependency relations significantly improve the performance for ASR and manual results. It is shown that semantic relations help the SLU model infer the implicit meaning, and then the prediction becomes more accurate. Also, dependency relations successfully differentiate the generic concepts from the domain-specific concepts, so that the SLU model is able to predict more coherent set of semantic slots (Chen et al., 2015). Integrating two types of relations (column (f)) further improves the performance. 6.4.2 Comparing Word/ Slot Relation Models To analyze the performance results from interword and inter-slot relations, the columns (d) and (e) show the results considering only word relations and only slot relations respectively. It can be seen that the inter-slot relation model significantly improves the performance for both ASR and manual results. However, the inter-word relation model only performs slightly better results for ASR output (from 37.6% to 39.2%), and there is no difference after applying the inter-word relation model on manual transcripts. The reason may be that inter-slot relations carry high-level semantics that align well with the structure of SDSs, but inter-word relations do not. Nevertheless, combining two relations (column (f)) outperforms both results for ASR and manual transcripts, showing that different types of relations can compensate each other and then benefit the SLU performance. 7 Conclusions This paper presents an MF approach to self-train the SLU model for semantic decoding in an unsupervised way. The purpose of the proposed model is not only to predict the probability of each semantic slot but also to distinguish between generic semantic concepts and domain-specific concepts that are related to an SDS. The experiments show that the MF-based model obtains promising results, outperforming strong discriminative baselines. Acknowledgments We thank anonymous reviewers for their useful comments and Prof. Manfred Stede for his mentoring. We are also grateful to MetLife’s support. Any opinions, findings, and conclusions expressed in this publication are those of the authors and do not necessarily reflect the views of funding agencies. 491 References Collin F Baker, Charles J Fillmore, and John B Lowe. 1998. The Berkeley FrameNet project. In Proceedings of COLING, pages 86–90. Dan Bohus and Alexander I Rudnicky. 2009. The RavenClaw dialog management framework: Architecture and systems. Computer Speech & Language, 23(3):332–361. Antoine Bordes, Jason Weston, Ronan Collobert, Yoshua Bengio, et al. 2011. Learning structured embeddings of knowledge bases. In Proceedings of AAAI. Antoine Bordes, Nicolas Usunier, Alberto GarciaDuran, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Proceedings of Advances in Neural Information Processing Systems, pages 2787– 2795. Kendrick Boyd, Vitor Santos Costa, Jesse Davis, and C David Page. 2012. Unachievable region in precision-recall space and its effect on empirical evaluation. In Machine learning: proceedings of the International Conference. International Conference on Machine Learning, volume 2012, page 349. NIH Public Access. Asli Celikyilmaz, Dilek Hakkani-T¨ur, and Gokhan T¨ur. 2011. Leveraging web query logs to learn user intent via bayesian discrete latent variable model. In Proceedings of ICML. Yun-Nung Chen and Florian Metze. 2012. 
Twolayer mutually reinforced random walk for improved multi-party meeting summarization. In Proceedings of The 4th IEEE Workshop on Spoken Language Tachnology, pages 461–466. Yun-Nung Chen and Alexander I. Rudnicky. 2014. Dynamically supporting unexplored domains in conversational interactions by enriching semantics with neural word embeddings. In Proceedings of 2014 IEEE Spoken Language Technology Workshop (SLT), pages 590–595. IEEE. Yun-Nung Chen, William Yang Wang, and Alexander I. Rudnicky. 2013a. An empirical investigation of sparse log-linear models for improved dialogue act classification. In Proceedings of ICASSP, pages 8317–8321. Yun-Nung Chen, William Yang Wang, and Alexander I Rudnicky. 2013b. Unsupervised induction and filling of semantic slots for spoken dialogue systems using frame-semantic parsing. In Proceedings of 2013 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU), pages 120–125. IEEE. Yun-Nung Chen, Dilek Hakkani-T¨ur, and Gokan Tur. 2014a. Deriving local relational surface forms from dependency-based entity embeddings for unsupervised spoken language understanding. In Proceedings of 2014 IEEE Spoken Language Technology Workshop (SLT), pages 242–247. IEEE. Yun-Nung Chen, William Yang Wang, and Alexander I. Rudnicky. 2014b. Leveraging frame semantics and distributional semantics for unsupervised semantic slot induction in spoken dialogue systems. In Proceedings of 2014 IEEE Spoken Language Technology Workshop (SLT), pages 584–589. IEEE. Yun-Nung Chen, William Yang Wang, and Alexander I. Rudnicky. 2015. Jointly modeling inter-slot relations by random walk on knowledge graphs for unsupervised spoken language understanding. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics - Human Language Technologies. ACL. Michael Collins, Sanjoy Dasgupta, and Robert E Schapire. 2001. A generalization of principal components analysis to the exponential family. In Proceedings of Advances in Neural Information Processing Systems, pages 617–624. Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A Smith. 2010. Probabilistic frame-semantic parsing. In Proceedings of The Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 948–956. Dipanjan Das, Desai Chen, Andr´e F. T. Martins, Nathan Schneider, and Noah A. Smith. 2013. Frame-semantic parsing. Computational Linguistics. Marco Dinarelli, Silvia Quarteroni, Sara Tonelli, Alessandro Moschitti, and Giuseppe Riccardi. 2009. Annotating spoken dialogs: from speech segments to dialog acts and frame semantics. In Proceedings of the 2nd Workshop on Semantic Representation of Spoken Language, pages 34–41. ACL. John Dowding, Jean Mark Gawron, Doug Appelt, John Bear, Lynn Cherny, Robert Moore, and Douglas Moran. 1993. Gemini: A natural language system for spoken-language understanding. In Proceedings of ACL, pages 54–61. Ali El-Kahky, Derek Liu, Ruhi Sarikaya, G¨okhan T¨ur, Dilek Hakkani-T¨ur, and Larry Heck. 2014. Extending domain coverage of language understanding systems via intent transfer between domains using knowledge graphs and search query click logs. In Proceedings of ICASSP. Zeno Gantner, Steffen Rendle, Christoph Freudenthaler, and Lars Schmidt-Thieme. 2011. Mymedialite: A free recommender system library. In Proceedings of the fifth ACM conference on Recommender systems, pages 305–308. ACM. 
492 Narendra Gupta, G¨okhan T¨ur, Dilek Hakkani-T¨ur, Srinivas Bangalore, Giuseppe Riccardi, and Mazin Gilbert. 2006. The AT&T spoken language understanding system. IEEE Transactions on Audio, Speech, and Language Processing, 14(1):213–222. Dilek Hakkani-T¨ur, Larry Heck, and Gokhan Tur. 2013. Using a knowledge graph and query click logs for unsupervised learning of relation detection. In Proceedings of ICASSP, pages 8327–8331. Larry Heck and Dilek Hakkani-T¨ur. 2012. Exploiting the semantic web for unsupervised spoken language understanding. In Proceedings of SLT, pages 228– 233. Larry P Heck, Dilek Hakkani-T¨ur, and Gokhan Tur. 2013. Leveraging knowledge graphs for web-scale unsupervised semantic parsing. In Proceedings of INTERSPEECH, pages 1594–1598. Matthew Henderson, Milica Gasic, Blaise Thomson, Pirros Tsiakoulis, Kai Yu, and Steve Young. 2012. Discriminative spoken language understanding using word confusion networks. In Proceedings of SLT, pages 176–181. Frederick Jelinek. 1997. Statistical methods for speech recognition. MIT press. Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer, (8):30–37. Omer Levy and Yoav Goldberg. 2014. Dependencybased word embeddings. In Proceedings of ACL. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. In Proceedings of Workshop at ICLR. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proceedings of Advances in Neural Information Processing Systems, pages 3111–3119. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continuous space word representations. In HLT-NAACL, pages 746– 751. Citeseer. Fabian Pedregosa, Ga¨el Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in python. The Journal of Machine Learning Research, 12:2825–2830. Roberto Pieraccini, Evelyne Tzoukermann, Zakhar Gorelov, J Gauvain, Esther Levin, Chin-Hui Lee, and Jay G Wilpon. 1992. A speech understanding system based on statistical representation of semantics. In Proceedings of 1992 IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), volume 1, pages 193–196. IEEE. Steffen Rendle, Christoph Freudenthaler, Zeno Gantner, and Lars Schmidt-Thieme. 2009. BPR: Bayesian personalized ranking from implicit feedback. In Proceedings of the Twenty-Fifth Conference on Uncertainty in Artificial Intelligence, pages 452–461. AUAI Press. Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation extraction with matrix factorization and universal schemas. In Proceedings of NAACL-HLT, pages 74–84. Stephanie Seneff. 1992. TINA: A natural language system for spoken language applications. Computational linguistics, 18(1):61–86. Anders Skrondal and Sophia Rabe-Hesketh. 2004. Generalized latent variable modeling: Multilevel, longitudinal, and structural equation models. Crc Press. Richard Socher, John Bauer, Christopher D Manning, and Andrew Y Ng. 2013. Parsing with compositional vector grammars. In Proceedings of the ACL conference. Citeseer. Gokhan Tur, Dilek Z Hakkani-T¨ur, Dustin Hillard, and Asli Celikyilmaz. 2011. Towards unsupervised spoken language understanding: Exploiting query click logs for slot filling. 
In Proceedings of INTERSPEECH, pages 1293–1296. Gokhan Tur, Minwoo Jeong, Ye-Yi Wang, Dilek Hakkani-T¨ur, and Larry P Heck. 2012. Exploiting the semantic web for unsupervised natural language semantic parsing. In Proceedings of INTERSPEECH. Gokhan Tur, Asli Celikyilmaz, and Dilek HakkaniTur. 2013. Latent semantic modeling for slot filling in conversational understanding. In Proceedings of 2013 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8307–8311. IEEE. William Yang Wang, Dan Bohus, Ece Kamar, and Eric Horvitz. 2012. Crowdsourcing the acquisition of natural language corpora: Methods and observations. In Proceedings of SLT, pages 73–78. Lu Wang, Dilek Hakkani-T¨ur, and Larry Heck. 2014. Leveraging semantic web search and browse sessions for multi-turn spoken dialog systems. In Proceedings of 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 4082–4086. IEEE. Wen-tau Yih, Xiaodong He, and Christopher Meek. 2014. Semantic parsing for single-relation question answering. In Proceedings of ACL. Steve Young, Milica Gasic, Blaise Thomson, and Jason D Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160–1179. 493 Ke Zhai and Jason D Williams. 2014. Discovering latent structure in task-oriented dialogues. In Proceedings of the Association for Computational Linguistics. 494
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 495–503, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Efficient Disfluency Detection with Transition-based Parsing Shuangzhi Wu† , Dongdong Zhang‡ , Ming Zhou‡ , Tiejun Zhao† †Harbin Institute of Technology ‡Microsoft Research {v-shuawu, dozhang, mingzhou}@microsoft.com [email protected] Abstract Automatic speech recognition (ASR) outputs often contain various disfluencies. It is necessary to remove these disfluencies before processing downstream tasks. In this paper, an efficient disfluency detection approach based on right-to-left transitionbased parsing is proposed, which can efficiently identify disfluencies and keep ASR outputs grammatical. Our method exploits a global view to capture long-range dependencies for disfluency detection by integrating a rich set of syntactic and disfluency features with linear complexity. The experimental results show that our method outperforms state-of-the-art work and achieves a 85.1% f-score on the commonly used English Switchboard test set. We also apply our method to in-house annotated Chinese data and achieve a significantly higher f-score compared to the baseline of CRF-based approach. 1 Introduction With the development of the mobile internet, speech inputs have become more and more popular in applications where automatic speech recognition (ASR) is the key component to convert speech into text. ASR outputs often contain various disfluencies which create barriers to subsequent text processing tasks like parsing, machine translation and summarization. Usually, disfluencies can be classified into uncompleted words, filled pauses (e.g. “uh”, “um”), discourse markers (e.g. “I mean”), editing terms (e.g. “you know”) and repairs. To identify and remove disfluencies, straightforward rules can be designed to tackle the former four classes of disfluencies since they often belong to a closed set. However, the repair type disfluency poses particularly more difficult problems as their form is more arbitrary. Typically, as shown in Figure 1, a repair disfluency type consists of a reparandum (“to Boston”) and a filled pause (“um”), followed by its repair (“to Denver”). This special structure of disfluency constraint, which exists in many languages such as English and Chinese, reflects the scenarios of spontaneous speech and conversation, where people often correct preceding words with following words when they find that the preceding words are wrong or improper. This procedure might be interrupted and inserted with filled pauses when people are thinking or hesitating. The challenges of detecting repair disfluencies are that reparandums vary in length, may occur everywhere, and are sometimes nested. I want a flight to Boston um to Denver FP RM RP correct Figure 1: A typical example of repair type disfluency consists of FP (Filled Pause), RM (Reparandum), and RP (Repair). The preceding RM is corrected by the following RP. There are many related works on disfluency detection, that mainly focus on detecting repair type of disfluencies. 
Straightforwardly, disfluency detection can be treated as a sequence labeling problem and solved by well-known machine learning algorithms such as conditional random fields (CRF) or max-margin markov network (M3N) (Liu et al., 2006; Georgila, 2009; Qian and Liu, 2013), and prosodic features are also concerned in (Kahn et al., 2005; Zhang et al., 2006). These methods achieve good performance, but are not powerful enough to capture complicated disfluencies with longer spans or distances. Recently, syntax-based models such as transitionbased parser have been used for detecting disflu495 encies (Honnibal and Johnson, 2014; Rasooli and Tetreault, 2013). These methods can jointly perform dependency parsing and disfluency detection. But in these methods, great efforts are made to distinguish normal words from disfluent words as decisions cannot be made imminently from left to right, leading to inefficient implementation as well as performance loss. In this paper, we propose detecting disfluencies using a right-to-left transition-based dependency parsing (R2L parsing), where the words are consumed from right to left to build the parsing tree based on which the current word is predicted to be either disfluent or normal. The proposed models cater to the disfluency constraint and integrate a rich set of features extracted from contexts of lexicons and partial syntactic tree structure, where the parsing model and disfluency predicting model are jointly calculated in a cascaded way. As shown in Figure 2(b), while the parsing tree is being built, disfluency tags are predicted and attached to the disfluency nodes. Our models are quite efficient with linear complexity of 2∗N (N is the length of input). was great was great did he did root root N N N N N N X X (a) (b) he Figure 2: An instance of the detection procedure where ‘N’ stands for a normal word and ‘X’ a disfluency word. Words with italic font are Reparandums. (a) is the L2R detecting procedure and (b) is the R2L procedure. Intuitively, compared with previous syntaxbased work such as (Honnibal and Johnson, 2014) that uses left-to-right transition-based parsing (L2R parsing) model, our proposed approach simplifies disfluency detection by sequentially processing each word, without going back to modify the pre-built tree structure of disfluency words. As shown in Figure 2(a), the L2R parsing based joint approach needs to cut the pre-built dependency link between “did” and “he” when “was” is identified as the repair of “did”, which is never needed in our method as Figure 2(b). Furthermore, our method overcomes the deficiency issue in decoding of L2R parsing based joint method, meaning the number of parsing transitions for each hypothesis path is not identical to 2 ∗N, which leads to the failure of performing optimal search during decoding. For example, the involvement of the extra cut operation in Figure 2(a) destroys the competition scoring that accumulates over 2 ∗N transition actions among hypotheses in the standard transition-based parsing. Although the heuristic score, such as the normalization of transition count (Honnibal and Johnson, 2014), can be introduced, the total scores of all hypotheses are still not statistically comparable from a global view. We conduct the experiments on English Switchboard corpus. 
The results show that our method can achieve a 85.1% f-score with a gain of 0.7 point over state-of-the-art M3N labeling model in (Qian and Liu, 2013) and a gain of 1 point over state-of-the-art joint model proposed in (Honnibal and Johnson, 2014). We also apply our method on Chinese annotated data. As there is no available public data in Chinese, we annotate 25k Chinese sentences manually for training and testing. We achieve 71.2% f-score with 15 points gained compared to the CRF-based baseline, showing that our models are robust and language independent. 2 Transition-based dependency parsing In a typical transition-based parsing, the ShiftReduce decoding algorithm is applied and a queue and stack are maintained (Zhang and Clark, 2008). The queue stores the stream of the input and the front of the queue is indexed as the current word. The stack stores the unfinished words which may be linked to the current word or a future word in the queue. When words in the queue are consumed in sequential order, a set of transition actions is applied to build a parsing tree. There are four kinds of transition actions in the parsing process (Zhang and Clark, 2008), as described below. • Shift : Removes the front of the queue and pushes it to the stack. • Reduce : Pops the top of the stack. • LeftArc : Pops the top of the stack, and links the popped word to the front of the queue. • RightArc : Links the front of the queue to the top of the stack and, removes the front of the queue and pushes it to the stack. The choice of each transition action during parsing is scored by a generalized perceptron (Collins, 496 2002) which can be trained over a rich set of nonlocal features. In decoding, beam search is performed to search the optimal sequence of transition actions. As each word must be pushed to the stack once and popped off once, the number of actions needed to parse a sentence is always 2 ∗N, where N is the length of the sentence. Transition-based dependency parsing (Zhang and Clark, 2008) can be performed in either a leftto-right or a right-to-left way, both of which have a performance that is comparable as illustrated in Section 4. However, when they are applied to disfluency detection, their behaviors are very different due to the disfluency structure constraint. We prove that right-to-left transition-based parsing is more efficient than left-to-right transition-based parsing for disfluency detection. 3 Our method 3.1 Model Unlike previous joint methods (Honnibal and Johnson, 2014; Rasooli and Tetreault, 2013), we introduce dependency parsing into disfluency detection from theory. In the task of disfluency detection, we are given a stream of unstructured words from automatic speech recognition (ASR). We denote the word sequence with W n 1 := w1, w2,w3,...,wn, which is actually the inverse order of ASR words that should be wn, wn−1,wn−2,...,w1. The output of the task is a sequence of binary tags denoted as Dn 1 = d1, d2,d3,...,dn, where each di corresponds to wi, indicating whether wi is a disfluency word (X) or not (N).1 Our task can be modeled as formula (1), which is to search the best sequence D∗given the stream of words W n 1 . D∗= argmaxDP(Dn 1 |W n 1 ) (1) The dependency parsing tree is introduced into model (1) to guide detection. 
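Before the parse tree is folded into model (1), it may help to make the Section 2 transition machinery concrete. The following minimal sketch keeps only the parser state (stack, queue, arcs) and the four actions; the perceptron scoring and beam search are omitted, and the class and method names are ours rather than the authors'.

class ParserState:
    """Parser state for the Section 2 transition system (sketch)."""

    def __init__(self, words):
        self.words = words
        self.stack = []                        # indices of unfinished words
        self.queue = list(range(len(words)))   # front of the queue = current word
        self.arcs = []                         # (head, dependent) index pairs

    def shift(self):
        self.stack.append(self.queue.pop(0))

    def reduce(self):
        self.stack.pop()

    def left_arc(self):
        dep = self.stack.pop()                 # popped word depends on the queue front
        self.arcs.append((self.queue[0], dep))

    def right_arc(self):
        dep = self.queue.pop(0)                # queue front depends on the stack top
        self.arcs.append((self.stack[-1], dep))
        self.stack.append(dep)

    def is_terminal(self):
        return not self.queue

Since each word enters the stack exactly once (via Shift or RightArc) and leaves it exactly once (via Reduce or LeftArc), any complete derivation uses 2 * N actions; the right-to-left variant used in this paper applies the same machinery to the reversed word sequence.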
The rewritten formula is shown below: D∗= argmaxD X T P(Dn 1 , T|W n 1 ) (2) We jointly optimize disfluency detection and parsing with form (3), rather than considering all possible parsing trees: (D∗, T ∗) = argmax(D,T )P(Dn 1 , T|W n 1 ) (3) 1We just use tag ’N’ to represent a normal word, in practice normal words will not be tagged anything by default. As both the dependency tree and the disfluency tags are generated word by word, we decompose formula (3) into: (D∗, T ∗) = argmax(D,T ) n Y i=1 P(di, T i 1|W i 1, T i−1 1 ) (4) where T i 1 is the partial tree after word wi is consumed, di is the disfluency tag of wi. We simplify the joint optimization in a cascaded way with two different forms (5) and (6). (D∗, T ∗) = argmax(D,T ) n Y i=1 P(T i 1|W i 1, T i−1 1 ) × P(di|W i 1, T i 1) (5) (D∗, T ∗) = argmax(D,T ) n Y i=1 P(di|W i 1, T i−1 1 ) × P(T i 1|W i 1, T i−1 1 , di) (6) Here, P(T i 1|.) is the parsing model, and P(di|.) is the disfluency model used to predict the disluency tags on condition of the contexts of partial trees that have been built. In (5), the parsing model is calculated first, followed by the calculation of the disfluency model. Inspired by (Zhang et al., 2013), we associate the disfluency tags to the transition actions so that the calculation of P(di|W i 1, T i 1) can be omitted as di can be inferred from the partial tree T i 1. We then get (D∗, T ∗) = argmax(D,T ) n Y i=1 P(di, T i 1|W i 1, T i−1 1 ) (7) Where the parsing and disfluency detection are unified into one model. We refer to this model as the Unified Transition(UT) model. While in (6), the disfluency model is calculated first, followed by the calculation of the parsing model. We model P(di|.) as a binary classifier to classify whether a word is disfluent or not. We refer to this model as the binary classifier transition (BCT) model. 3.2 Unified transition-based model (UT) In model (7), in addition to the standard 4 transition actions mentioned in Section 2, the UT model 497 adds 2 new transition actions which extend the original Shift and RightArc transitions as shown below: • Dis Shift: Performs what Shift does then marks the pushed word as disfluent. • Dis RightArc: Adds a virtual link from the front of the queue to the top of the stack which is similar to Right Arc, marking the front of the queue as disfluenct and pushing it to the stack. Figure 3 shows an example of how the UT model works. Given an input “he did great was great”, the optimal parsing tree is predicted by the UT model. According to the parsing tree, we can get the disfluency tags “N X X N N” which have been attached to each word. To ensure the normal words are built grammatical in the parse tree, we apply a constraint to the UT model. UT model constraint: When a word is marked disfluent, all the words in its left and right subtrees will be marked disfluent and all the links of its descendent offsprings will be converted to virtual links, no matter what actions are applied to these words. For example, the italic word “great” will be marked disfluent, no matter what actions are performed on it. was great did he root N N X N great X Figure 3: An example of UT model, where ‘N’ means the word is a fluent word and ‘X’ means it is disfluent. Words with italic font are Reparandums. 
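Building on the parser-state sketch given earlier, the UT model's two additional actions and its constraint can be pictured as follows; the tag bookkeeping and the function names are assumptions made here for exposition, not code from the paper.

def dis_shift(state, tags):
    """Dis_Shift: behave like Shift, but tag the consumed word 'X'."""
    w = state.queue[0]
    state.shift()
    tags[w] = "X"

def dis_right_arc(state, tags):
    """Dis_RightArc: like RightArc, but the added link is virtual and the word is 'X'."""
    w = state.queue[0]
    state.right_arc()
    tags[w] = "X"

def apply_ut_constraint(arcs, tags):
    """UT constraint: every descendant of a disfluent word is also disfluent."""
    children = {}
    for head, dep in arcs:
        children.setdefault(head, []).append(dep)
    frontier = [w for w, t in tags.items() if t == "X"]
    while frontier:
        w = frontier.pop()
        for c in children.get(w, []):
            if tags.get(c) != "X":
                tags[c] = "X"
                frontier.append(c)

This propagation step mirrors the constraint example of Figure 3, where the italic "great" is tagged disfluent regardless of which actions are applied to it.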
3.3 A binary classifier transition-based model (BCT) In model (6), we perform the binary classifier and the parsing model together by augmenting the Shift-Reduce algorithm with a binary classifier transition(BCT) action: • BCT : Classifies whether the current word is disfluent or not. If it is, remove it from the queue, push it into the stack which is similar to Shift and then mark it as disfluent, otherwise the original transition actions will be used. It is noted that when BCT is performed, the next action must be Reduce. This constraint guarantees that any disfluent word will not have any descendent offspring. Figure 2(b) shows an example of the BCT model. When the partial tree “great was” is built, the next word “did” is obviously disfluent. Unlike UT model, the BCT will not link the word “did” to any word. Instead only a virtual link will add it to the virtual root. 3.4 Training and decoding In practice, we use the same linear model for both models (6) and (7) to score a parsing tree as: Score(T) = X action φ(action) · ⃗λ Where φ(action) is the feature vector extracted from partial hypothesis T for a certain action and ⃗λ is the weight vector. φ(action)·⃗λ calculates the score of a certain transition action. The score of a parsing tree T is the sum of action scores. In addition to the basic features introduced in (Zhang and Nivre, 2011) that are defined over bag of words and POS-tags as well as tree-based context, our models also integrate three classes of new features combined with Brown cluster features (Brown et al., 1992) that relate to the rightto-left transition-based parsing procedure as detailed below. Simple repetition function • δI(a, b): A logic function which indicates whether a and b are identical. Syntax-based repetition function • δL(a, b): A logic function which indicates whether a is a left child of b. • δR(a, b): A logic function which indicates whether a is a right child of b. Longest subtree similarity function • NI(a, b): The count of identical children on the left side of the root node between subtrees rooted at a and b. • N#(a0..n, b): The count of words among a0 .. an that are on the right of the subtree rooted at b. 498 Table 1 summarizes the features we use in the model computation, where ws denotes the top word of the stack, w0 denotes the front word of the queue and w0..2 denotes the top three words of the queue. Every pi corresponds to the POS-tag of wi and p0..2 represents the POS-tags of w0..2. In addition, wic means the Brown cluster of wi. With these symbols, several new feature templates are defined in Table 1. Both our models have the same feature templates. Basic features All templates in (Zhang and Nivre, 2011) New disfluency features Function unigrams δI(ws, w0);δI(ps, p0); δL(w0, ws);δL(p0, ps); δR(w0, ws);δR(p0, ps); NI(w0, ws);NI(p0, ps); N#(w0..2, ws);N#(p0..2, ps); Function bigrams δI(ws, w0)δI(ps, p0); δL(w0, ws)δL(p0, ps); δR(w0, ws)δR(p0, ps); NI(w0, ws)NI(p0, ps); N#(w0..2, ws)N#(p0..2, ps); δI(ws, w0)wsc; δI(ws, w0)w0c; Function trigrams wsw0δI(ws, w0); wsw0δI(ps, p0); Table 1: Feature templates designed for disfluency detection and dependency parsing. Similar to the work in (Zhang and Clark, 2008; Zhang and Nivre, 2011), we train our models by averaged perceptron (Collins, 2002). In decoding, beam search is performed to get the optimal parsing tree as well as the tag sequence. 
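The three classes of new disfluency feature functions admit a fairly direct reading. The sketch below is one such reading, under the assumption that the partial tree is stored as a map from a head's surface position to the list of its children's positions; all function names are ours, and N_I is implemented with one natural interpretation (comparing left children position by position).

def delta_identical(a, b):
    """delta_I(a, b): 1 if the two items are identical, else 0."""
    return int(a == b)

def delta_left_child(tree, ia, ib):
    """delta_L: 1 if position ia is a child of ib lying to its left."""
    return int(ia in tree.get(ib, []) and ia < ib)

def delta_right_child(tree, ia, ib):
    """delta_R: 1 if position ia is a child of ib lying to its right."""
    return int(ia in tree.get(ib, []) and ia > ib)

def count_identical_left_children(words, tree, ia, ib):
    """N_I: identical left children of the subtrees rooted at ia and ib."""
    left_a = [words[c] for c in tree.get(ia, []) if c < ia]
    left_b = [words[c] for c in tree.get(ib, []) if c < ib]
    return sum(1 for x, y in zip(left_a, left_b) if x == y)

def subtree_span_end(tree, b):
    """Rightmost surface position covered by the subtree rooted at b."""
    return max([b] + [subtree_span_end(tree, c) for c in tree.get(b, [])])

def count_right_of_subtree(positions, tree, b):
    """N_#: how many of the given positions lie to the right of b's subtree."""
    end = subtree_span_end(tree, b)
    return sum(1 for i in positions if i > end)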
4 Experiments 4.1 Experimental setup Our training data is the Switchboard portion of the English Penn Treebank (Marcus et al., 1993) corpus, which consists of telephone conversations about assigned topics. As not all the Switchboard data has syntactic bracketing, we only use the subcorpus of PAESED/MRG/SWBD. Following the experiment settings in (Charniak and Johnson, 2001), the training subcorpus contains directories 2 and 3 in PAESED/MRG/SWBD and directory 4 is split into test and development sets. We use the Stanford dependency converter (De Marneffe et al., 2006) to get the dependency structure from the Switchboard corpus, as Honnibal and Johnson (2014) prove that Stanford converter is robust to the Switchboard data. For our Chinese experiments, no public Chinese corpus is available. We annotate about 25k spoken sentences with only disfluency annotations according to the guideline proposed by Meteer et al. (1995). In order to generate similar data format as English Switchboard corpus, we use Chinese dependency parsing trained on the Chinese Treebank corpus to parse the annotated data and use these parsed data for training and testing . For our Chinese experiment setting, we respectively select about 2k sentences for development and testing. The rest are used for training. To train the UT model, we create data format adaptation by replacing the original Shift and RightArc of disfluent words with Dis Shift and Dis RightArc, since they are just extensions of Shift and RightArc. For the BCT model, disfluent words are directly depended to the root node and all their links and labels are removed. We then link all the fluent children of disfluent words to parents of disfluent words. We also remove partial words and punctuation from data to simulate speech recognizer results where such information is not available (Johnson and Charniak, 2004). Additionally, following Honnibal and Johnson (2014), we remove all one token sentences as these sentences are trivial for disfluency detection, then lowercase the text and discard filled pauses like “um” and “uh”. The evaluation metrics of disfluency detection are precision (Prec.), recall (Rec.) and f-score (F1). For parsing accuracy metrics, we use unlabeled attachment score (UAS) and labeled attachment score (LAS). For our primary comparison, we evaluate the widely used CRF labeling model, the state-of-the-art M3N model presented by Qian and Liu (2013) which has been commonly used as baseline in previous works and the state-of-the-art L2R parsing based joint model proposed by Honnibal and Johnson (2014). 4.2 Experimental results 4.2.1 Performance of disfluency detection on English Swtichboard corpus The evaluation results of both disfluency detection and parsing accuracy are presented in Table 2. The accuracy of M3N directly refers to the re499 Disfluency detection accuracy Parsing accuracy Method Prec. Rec. F1 UAS LAS CRF(BOW) 81.2% 44.9% 57.8% 88.7% 84.7% CRF(BOW+POS) 88.3% 62.2% 73.1% 89.2% 85.6% M3N N/A N/A 84.1% N/A N/A M3N† 90.5% 79.1% 84.4% 91% 88.2% H&J N/A N/A 84.1% 90.5% N/A UT(basic features) 86% 72.5% 78.7% 91.9% 89.0% UT(+new features) 88.8% 75.1% 81.3% 92.1% 89.4% BCT(basic features) 88.2% 77.9% 82.7% 92.1% 89.3% BCT(+new features) 90.3% 80.5% 85.1% 92.2% 89.6% Table 2: Disfluency detection and parsing accuracies on English Switchboard data. The accuracy of M3N refers to the result reported in (Qian and Liu, 2013). H&J is the L2R parsing based joint model in (Honnibal and Johnson, 2014). 
The results of M3N† come from the experiments with toolkit released by Qian and Liu (2013) on our pre-processed corpus. sults reported in (Qian and Liu, 2013). The results of M3N† come from our experiments with the toolkit2 released by Qian and Liu (2013) which uses our data set with the same pre-processing. It is comparable between our models and the L2R parsing based joint model presented by Honnibal and Johnson (2014), as we all conduct experiments on the same pre-processed data set. In order to compare parsing accuracy, we use the CRF and M3N† model to pre-process the test set by removing all the detected disfluencies, then evaluate the parsing performance on the processed set. From the table, our BCT model with new disfluency features achieves the best performance on disfluency detection as well as dependency parsing. The performance of the CRF model is low, because the local features are not powerful enough to capture long span disfluencies. Our main comparison is with the M3N† labeling model and the L2R parsing based model by Honnibal and Johnson (2014). As illustrated in Table 2, the BCT model outperforms the M3N† model (we got an accuracy of 84.4%, though 84.1% was reported in their paper) and the L2R parsing based model respectively by 0.7 point and 1 point on disfluency detection, which shows our method can efficiently tackle disfluencies. This is because our method can cater extremely well to the disfluency constraint and perform optimal search with identical transition counts over all hypotheses in beam search. Furthermore, our global syntactic and dis2The toolkit is available at https://code.google.com/p/disfluency-detection/downloads. fluency features can help capture long-range dependencies for disfluency detection. However, the UT model does not perform as well as BCT. This is because the UT model suffers from the risk that normal words may be linked to disfluencies which may bring error propagation in decoding. In addition our models with only basic features respectively score about 3 points below the models adding new features, which shows that these features are important for disfluency detection. In comparing parsing accuracy, our BCT model outperforms all the other models, showing that this model is more robust on disfluent parsing. 4.2.2 Performance of disfluency detection on different part-of-speeches In this section, we further analyze the frequency of different part-of-speeches in disfluencies and test the performance on different part-of-speeches. Five classes of words take up more than 73% of all disfluencies as shown in Table 3, which are pronouns (contain PRP and PRP$), verbs (contain VB,VBD,VBP,VBZ,VBN), determiners (contain DT), prepositions (contain IN) and conjunctions (contain CC). Obviously, these classes of words appear frequently in our communication. Pron. Verb Dete. Prep. Conj. Others Dist. 30.2% 14.7% 13% 8.7% 6.7% 26.7% Table 3: Distribution of different part-ofspeeches in disfluencies. Conj.=conjunction; Dete.=determiner; Pron.=pronoun; Prep.= preposition. 500 Table 4 illustrates the performance (f-score) on these classes of words. The results of L2R parsing based joint model in (Honnibal and Johnson, 2014) are not listed because we cannot get such detailed data. CRF (BOW) CRF (BOW +POS) M3N† UT (+feat.) BCT (+feat.) Pron. 73.9% 85% 92% 91.5% 93.8% Verb 38.2% 64.8% 84.2% 82.3% 84.7% Dete. 66.8% 80% 88% 83.7% 87% Prep. 60% 71.5% 79.1% 76.1% 79.3% Conj. 
75.2% 82.2% 81.6% 79.5% 83.2% Others 43.2% 61% 78.4% 72.3% 79.1% Table 4: Performance on different classes of words. Dete.=determiner; Pron.=pronoun; Conj.=conjunction; Prep.= preposition. feat.=new disfluency features As shown in Table 4, our BCT model outperforms all other models except that the performance on determiner is lower than M3N†, which shows that our algorithm can significantly tackle common disfluencies. 4.2.3 Performance of disfluency detection on Chinese annotated corpus In addition to English experiments, we also apply our method on Chinese annotated data. As there is no standard Chinese corpus, no Chinese experimental results are reported in (Honnibal and Johnson, 2014; Qian and Liu, 2013). We only use the CRF-based labeling model with lexical and POStag features as baselines. Table 5 shows the results of Chinese disfluency detection. Model Prec. Rec. F1 CRF(BOW) 89.5% 35.6% 50.9% CRF(BOW+POS) 83.4% 41.6% 55.5% UT(+new features) 86.7% 59.5% 70.6% BCT(+new features) 85.5% 61% 71.2% Table 5: Disfluency detection performance on Chinese annotated data. Our models outperform the CRF model with bag of words and POS-tag features by more than 15 points on f-score which shows that our method is more effective. As shown latter in 4.2.4, the standard transition-based parsing is not robust in parsing disfluent text. There are a lot of parsing errors in Chinese training data. Even though we are still able to get promising results with less data and un-golden parsing annotations. We believe that if we were to have the golden Chinese syntactic annotations and more data, we would get much better results. 4.2.4 Performance of transition-based parsing In order to show whether the advantage of the BCT model is caused by the disfluency constraint or the difference between R2L and L2R parsing models, in this section, we make a comparison between the original left-to-right transition-based parsing and right-to-left parsing. These experiments are performed with the Penn Treebank (PTB) Wall Street Journal (WSJ) corpus. We follow the standard approach to split the corpus as 2-21 for training, 22 for development and section 23 for testing (McDonald et al., 2005). The features for the two parsers are basic features in Table 1. The POStagger model that we implement for a pre-process before parsing also uses structured perceptron for training and can achieve a competitive accuracy of 96.7%. The beam size for both POS-tagger and parsing is set to 5. Table 6 presents the results on WSJ test set and Switchboard (SWBD) test set. Data sets Model UAS LAS WSJ L2R Parsing 92.1% 89.8% R2L Parsing 92.0% 89.6% SWBD L2R Parsing 88.4% 84.4% R2L Parsing 88.7% 84.8% Table 6: Performance of our parsers on different test sets. The parsing accuracy on SWBD is lower than WSJ which means that the parsers are more robust on written text data. The performances of R2L and L2R parsing are comparable on both SWBD and WSJ test sets. This demonstrates that the effectiveness of our disfluency detection model mainly relies on catering to the disfluency constraint by using R2L parsing based approach, instead of the difference in parsing models between L2R and R2L parsings. 5 Related work In practice, disfluency detection has been extensively studied in both speech processing field and natural language processing field. Noisy channel models have been widely used in the past to detect 501 disfluencies. Johnson and Charniak (2004) proposed a TAG-based noisy channel model where the TAG model was used to find rough copies. 
Thereafter, a language model and MaxEnt reranker were added to the noisy channel model by Johnson et al. (2004). Following their framework, Zwarts and Johnson (2011) extended this model using minimal expected f-loss oriented nbest reranking with additional corpus for language model training. Recently, the max-margin markov networks (M3N) based model has achieved great improvement in this task. Qian and Liu (2013) presented a multi-step learning method using weighted M3N model for disfluency detection. They showed that M3N model outperformed many other labeling models such as CRF model. Following this work, Wang et al. (2014) used a beam-search decoder to combine multiple models such as M3N and language model, they achieved the highest f-score. However, direct comparison with their work is difficult as they utilized the whole SWBD data while we only use the subcorpus with syntactic annotation which is only half the SWBD corpus and they also used extra corpus for language model training. Additionally, syntax-based approaches have been proposed which concern parsing and disfluency detection together. Lease and Johnson (2006) involved disfluency detection in a PCFG parser to parse the input along with detecting disfluencies. Miller and Schuler (2008) used a right corner transform of syntax trees to produce a syntactic tree with speech repairs. But their performance was not as good as labeling models. There exist two methods published recently which are similar to ours. Rasooli and Tetreault (2013) designed a joint model for both disfluency detection and dependency parsing. They regarded the two tasks as a two step classifications. Honnibal and Johnson (2014) presented a new joint model by extending the original transition actions with a new “Edit” transition. They achieved the state-of-theart performance on both disfluency detection and parsing. But this model suffers from the problem that the number of transition actions is not identical for different hypotheses in decoding, leading to the failure of performing optimal search. In contrast, our novel right-to-left transition-based joint method caters to the disfluency constraint which can not only overcome the decoding deficiency in previous work but also achieve significantly higher performance on disfluency detection as well as dependency parsing. 6 Conclusion and Future Work In this paper, we propose a novel approach for disfluency detection. Our models jointly perform parsing and disfluency detection from right to left by integrating a rich set of disfluency features which can yield parsing structure and difluency tags at the same time with linear complexity. The algorithm is easy to implement without complicated backtrack operations. Experiential results show that our approach outperforms the baselines on the English Switchboard corpus and experiments on the Chinese annotated corpus also show the language independent nature of our method. The state-of-the-art performance on disfluency detection and dependency parsing can benefit the downstream tasks of text processing. In future work, we will try to add new classes of features to further improve performance by capturing the property of disfluencies. We would also like to make an end-to-end MT test over transcribed speech texts with disfluencies removed based on the method proposed in this paper. Acknowledgments We are grateful to the anonymous reviewers for their insightful comments. We also thank Mu Li, Shujie Liu, Lei Cui and Nan Yang for the helpful discussions. 
References Peter F Brown, Peter V Desouza, Robert L Mercer, Vincent J Della Pietra, and Jenifer C Lai. 1992. Class-based n-gram models of natural language. Computational linguistics, 18(4):467–479. Eugene Charniak and Mark Johnson. 2001. Edit detection and parsing for transcribed speech. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, pages 1–9. Association for Computational Linguistics. Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natural language processing-Volume 10, pages 1–8. Association for Computational Linguistics. 502 Marie-Catherine De Marneffe, Bill MacCartney, Christopher D Manning, et al. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of LREC, volume 6, pages 449–454. Kallirroi Georgila. 2009. Using integer linear programming for detecting speech disfluencies. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers, pages 109–112. Association for Computational Linguistics. Matthew Honnibal and Mark Johnson. 2014. Joint incremental disfluency detection and dependency parsing. Transactions of the Association for Computational Linguistics, 2:131–142. Mark Johnson and Eugene Charniak. 2004. A tagbased noisy channel model of speech repairs. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, page 33. Association for Computational Linguistics. Mark Johnson, Eugene Charniak, and Matthew Lease. 2004. An improved model for recognizing disfluencies in conversational speech. In Proceedings of Rich Transcription Workshop. Jeremy G Kahn, Matthew Lease, Eugene Charniak, Mark Johnson, and Mari Ostendorf. 2005. Effective use of prosody in parsing conversational speech. In Proceedings of the conference on human language technology and empirical methods in natural language processing, pages 233–240. Association for Computational Linguistics. Matthew Lease and Mark Johnson. 2006. Early deletion of fillers in processing conversational speech. In Proceedings of the Human Language Technology Conference of the NAACL, Companion Volume: Short Papers, pages 73–76. Association for Computational Linguistics. Yang Liu, Elizabeth Shriberg, Andreas Stolcke, Dustin Hillard, Mari Ostendorf, and Mary Harper. 2006. Enriching speech recognition with automatic detection of sentence boundaries and disfluencies. Audio, Speech, and Language Processing, IEEE Transactions on, 14(5):1526–1540. Mitchell P Marcus, Mary Ann Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of english: The penn treebank. Computational linguistics, 19(2):313–330. Ryan McDonald, Koby Crammer, and Fernando Pereira. 2005. Online large-margin training of dependency parsers. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 91–98. Association for Computational Linguistics. Marie W Meteer, Ann A Taylor, Robert MacIntyre, and Rukmini Iyer. 1995. Dysfluency annotation stylebook for the switchboard corpus. University of Pennsylvania. Tim Miller and William Schuler. 2008. A unified syntactic model for parsing fluent and disfluent speech. 
In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, pages 105– 108. Association for Computational Linguistics. Xian Qian and Yang Liu. 2013. Disfluency detection using multi-step stacked learning. In HLT-NAACL, pages 820–825. Mohammad Sadegh Rasooli and Joel R Tetreault. 2013. Joint parsing and disfluency detection in linear time. In EMNLP, pages 124–129. Xuancong Wang, Hwee Tou Ng, and Khe Chai Sim. 2014. A beam-search decoder for disfluency detection. In Proc. of COLING. Yue Zhang and Stephen Clark. 2008. A tale of two parsers: investigating and combining graphbased and transition-based dependency parsing using beam-search. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 562–571. Association for Computational Linguistics. Yue Zhang and Joakim Nivre. 2011. Transition-based dependency parsing with rich non-local features. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 188–193. Association for Computational Linguistics. Qi Zhang, Fuliang Weng, and Zhe Feng. 2006. A progressive feature selection algorithm for ultra large feature spaces. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 561–568. Association for Computational Linguistics. Dongdong Zhang, Shuangzhi Wu, Nan Yang, and Mu Li. 2013. Punctuation prediction with transition-based parsing. In ACL (1), pages 752– 760. Simon Zwarts and Mark Johnson. 2011. The impact of language models and loss functions on repair disfluency detection. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 703–711. Association for Computational Linguistics. 503
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 504–513, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics S-MART: Novel Tree-based Structured Learning Algorithms Applied to Tweet Entity Linking Yi Yang School of Interactive Computing Georgia Institute of Technology [email protected] Ming-Wei Chang Microsoft Research [email protected] Abstract Non-linear models recently receive a lot of attention as people are starting to discover the power of statistical and embedding features. However, tree-based models are seldom studied in the context of structured learning despite their recent success on various classification and ranking tasks. In this paper, we propose S-MART, a tree-based structured learning framework based on multiple additive regression trees. S-MART is especially suitable for handling tasks with dense features, and can be used to learn many different structures under various loss functions. We apply S-MART to the task of tweet entity linking — a core component of tweet information extraction, which aims to identify and link name mentions to entities in a knowledge base. A novel inference algorithm is proposed to handle the special structure of the task. The experimental results show that S-MART significantly outperforms state-of-the-art tweet entity linking systems. 1 Introduction Many natural language processing (NLP) problems can be formalized as structured prediction tasks. Standard algorithms for structured learning include Conditional Random Field (CRF) (Lafferty et al., 2001) and Structured Supported Vector Machine (SSVM) (Tsochantaridis et al., 2004). These algorithms, usually equipped with a linear model and sparse lexical features, achieve stateof-the-art performances in many NLP applications such as part-of-speech tagging, named entity recognition and dependency parsing. This classical combination of linear models and sparse features is challenged by the recent emerging usage of dense features such as statistical and embedding features. Tasks with these low dimensional dense features require models to be more sophisticated to capture the relationships between features. Therefore, non-linear models start to receive more attention as they are often more expressive than linear models. Tree-based models such as boosted trees (Friedman, 2001) are flexible non-linear models. They can handle categorical features and count data better than other non-linear models like Neural Networks. Unfortunately, to the best of our knowledge, little work has utilized tree-based methods for structured prediction, with the exception of TreeCRF (Dietterich et al., 2004). In this paper, we propose a novel structured learning framework called S-MART (Structured Multiple Additive Regression Trees). Unlike TreeCRF, S-MART is very versatile, as it can be applied to tasks beyond sequence tagging and can be trained under various objective functions. SMART is also powerful, as the high order relationships between features can be captured by nonlinear regression trees. We further demonstrate how S-MART can be applied to tweet entity linking, an important and challenging task underlying many applications including product feedback (Asur and Huberman, 2010) and topic detection and tracking (Mathioudakis and Koudas, 2010). 
We apply S-MART to entity linking using a simple logistic function as the loss function and propose a novel inference algorithm to prevent overlaps between entities. Our contributions are summarized as follows: • We propose a novel structured learning framework called S-MART. S-MART combines non-linearity and efficiency of treebased models with structured prediction, leading to a family of new algorithms. (Section 2) 504 • We apply S-MART to tweet entity linking. Building on top of S-MART, we propose a novel inference algorithm for nonoverlapping structure with the goal of preventing conflicting entity assignments. (Section 3) • We provide a systematic study of evaluation criteria in tweet entity linking by conducting extensive experiments over major data sets. The results show that S-MART significantly outperforms state-of-the-art entity linking systems, including the system that is used to win the NEEL 2014 challenge (Cano and others, 2014). (Section 4) 2 Structured Multiple Additive Regression Trees The goal of a structured learning algorithm is to learn a joint scoring function S between an input x and an output structure y, S : (x, y) →R. The structured output y often contains many interdependent variables, and the number of the possible structures can be exponentially large with respect to the size of x. At test time, the prediction y for x is obtained by arg max y∈Gen(x) S(x, y), where Gen(x) represents the set of all valid output structures for x. Standard learning algorithms often directly optimize the model parameters. For example, assume that the joint scoring function S is parameterized by θ. Then, gradient descent algorithms can be used to optimize the model parameters θ iteratively. More specifically, θm = θm−1 −ηm ∂L(y∗, S(x, y; θ)) ∂θm−1 , (1) where y∗is the gold structure, L(y∗, S(x, y; θ)) is a loss function and ηm is the learning rate of the m-th iteration. In this paper, we propose a framework called Structured Multiple Additive Regression Trees (S-MART), which generalizes Multiple Additive Regression Trees (MART) for structured learning problems. Different from Equation (1), SMART does not directly optimize the model parameters; instead, it approximates the optimal scoring function that minimize the loss by adding (weighted) regression tree models iteratively. Due to the fact that there are exponentially many input-output pairs in the training data, S-MART assumes that the joint scoring function can be decomposed as S(x, y) = X k∈Ω(x) F(x, yk), where Ω(x) contains the set of the all factors for input x and yk is the sub-structure of y that corresponds to the k-th factor in Ω(x). For instance, in the task of word alignment, each factor can be defined as a pair of words from source and target languages respectively. Note that we can recover y from the union of {yk}K 1 . The factor scoring function F(x, yk) can be optimized by performing gradient descent in the function space in the following manner: Fm(x, yk) = Fm−1(x, yk) −ηmgm(x, yk) (2) where function gm(x, yk) is the functional gradient. Note that gm is a function rather than a vector. Therefore, modeling gm theoretically requires an infinite number of data points. We can address this difficulty by approximating gm with a finite number of point-wise functional gradients gm(x, yk = uk) = (3) ∂L(y∗, S(x, yk = uk)) ∂F(x, yk = uk)  F(x,yk)=Fm−1(x,yk) where uk index a valid sub-structure for the k-th factor of x. 
The key point of S-MART is that it approximates −gm by modeling the point-wise negative functional gradients using a regression tree hm. Then the factor scoring function can be obtained by F(x, yk) = M X m=1 ηmhm(x, yk), where hm(x, yk) is also called a basis function and ηm can be simply set to 1 (Murphy, 2012). The detailed S-MART algorithm is presented in Algorithm 1. The factor scoring function F(x, yk) is simply initialized to zero at first (line 1). After this, we iteratively update the function by adding regression trees. Note that the scoring function is shared by all the factors. Specifically, given the current decision function Fm−1, we can consider line 3 to line 9 a process of generating the pseudo 505 Algorithm 1 S-MART: A family of structured learning algorithms with multiple additive regression trees 1: F0(x, yk) = 0 2: for m = 1 to M do: ▷going over all trees 3: D ←∅ 4: for all examples do: ▷going over all examples 5: for yk ∈Ω(x) do: ▷going over all factors 6: For all uk, obtain gku by Equation (3) 7: D ←D ∪{(Φ(x, yk = uk), −gku)} 8: end for 9: end for 10: hm(x, yk) ←TrainRegressionTree(D) 11: Fm(x, yk) = Fm−1(x, yk) + hm(x, yk) 12: end for training data D for modeling the regression tree. For each training example, S-MART first computes the point-wise functional gradients according to Equation (3) (line 6). Here we use gku as the abbreviation for gm(x, yk = uk). In line 7, for each sub-structure uk, we create a new training example for the regression problem by the feature vector Φ(x, yk = uk) and the negative gradient −gku. In line 10, a regression tree is constructed by minimizing differences between the prediction values and the point-wise negative gradients. Then a new basis function (modeled by a regression tree) will be added into the overall F (line 11). It is crucial to note that S-MART is a family of algorithms rather than a single algorithm. S-MART is flexible in the choice of the loss functions. For example, we can use either logistic loss or hinge loss, which means that SMART can train probabilistic models as well as non-probabilistic ones. Depending on the choice of factors, S-MART can handle various structures such as linear chains, trees, and even the semiMarkov chain (Sarawagi and Cohen, 2004). S-MART versus MART There are two key differences between S-MART and MART. First, S-MART decomposes the joint scoring function S(x, y) into factors to address the problem of the exploding number of input-output pairs for structured learning problems. Second, S-MART models a single scoring function F(x, yk) over inputs and output variables directly rather than O different functions F o(x), each of which corresponds to a label class. S-MART versus TreeCRF TreeCRF can be viewed as a special case of S-MART, and there are two points where S-MART improves upon TreeCRF. First, the model designed in (Dietterich et al., 2004) is tailored for sequence tagging problems. Similar to MART, for a tagging task with O tags, they choose to model O functions F o(x, o′) instead of directly modeling the joint score of the factor. This imposes limitations on the feature functions, and TreeCRF is consequently unsuitable for many tasks such as entity linking.1Second, S-MART is more general in terms of the objective functions and applicable structures. In the next section, we will see how S-MART can be applied to a non-linear-chain structure and various loss functions. 3 S-MART for Tweet Entity Linking We first formally define the task of tweet entity linking. 
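A minimal sketch of the generic training loop of Algorithm 1, written here with scikit-learn regression trees, may make the functional-gradient view concrete. The callbacks for enumerating factors, building feature vectors, and computing the point-wise gradient of Equation (3) are left abstract, and all concrete names and defaults are illustrative rather than taken from any released system. The rest of this section instantiates these pieces for tweet entity linking.

from sklearn.tree import DecisionTreeRegressor

def train_smart(examples, factors, features, pointwise_gradient,
                num_trees=300, min_samples_leaf=30):
    """Sketch of Algorithm 1: fit one regression tree per boosting round.

    factors(x)            -> iterable of (k, candidate sub-structures of factor k)
    features(x, k, u)     -> feature vector Phi(x, y_k = u)
    pointwise_gradient(.) -> Equation (3) for the chosen loss, given current F
    """
    trees = []

    def F(x, k, u):
        phi = [features(x, k, u)]
        return sum(t.predict(phi)[0] for t in trees)      # F_0 = 0

    for m in range(num_trees):
        X, targets = [], []
        for x, y_gold in examples:
            for k, candidates in factors(x):
                for u in candidates:
                    g = pointwise_gradient(x, y_gold, k, u, F)
                    X.append(features(x, k, u))
                    targets.append(-g)                    # fit the negative gradient
        h = DecisionTreeRegressor(min_samples_leaf=min_samples_leaf)
        h.fit(X, targets)
        trees.append(h)                                   # F_m = F_{m-1} + h_m
    return trees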
As input, we are given a tweet, an entity database (e.g., Wikipedia where each article is an entity), and a lexicon2 which maps a surface form into a set of entity candidates. For each incoming tweet, all n-grams of this tweet will be used to find matches in the lexicon, and each match will form a mention candidate. As output, we map every mention candidate (e.g., “new york giants”) in the message to an entity (e.g., NEW YORK GIANTS) or to Nil (i.e., a non-entity). A mention candidate can often potentially link to multiple entities, which we call possible entity assignments. This task is a structured learning problem, as the final entity assignments of a tweet should not overlap with each other.3 We decompose this learning problem as follows: we make each mention candidate a factor, and the score of the entity assignments of a tweet is the sum of the score of each entity and mention candidate pair. Although all mention candidates are decomposed, the nonoverlapping constraint requires the system to perform global inference. Consider the example tweet in Figure 1, where we show the tweet with the mention candidates in brackets. To link the mention candidate “new york giants” to a non-Nil entity, the system has to link previous overlapping mention candidates to Nil. It is important to note that this is not a linear chain problem because of the non-overlapping constraint, and the inference algorithm needs to be 1For example, entity linking systems need to model the similarity between an entity and the document. The TreeCRF formulation does not support such features. 2We use the standard techniques to construct the lexicon from anchor texts, redirect pages and other information resources. 3We follow the common practice and do not allow embedded entities. 506 Figure 1: Example tweet and its mention candidates. Each mention candidate is marked as a pair of brackets in the original tweet and forms a column in the graph. The graph demonstrates the non-overlapping constraint. To link the mention candidate “new york giants” to a non-Nil entity, the system has to link previous four overlapping mention candidates to Nil. The mention candidate “eli manning” is not affected by “new york giants”. Note that this is not a standard linear chain problem. carefully designed. 3.1 Applying S-MART We derive specific model for tweet entity linking task with S-MART and use logistic loss as our running example. The hinge loss version of the model can be derived in a similar way. Note that the tweet and the mention candidates are given. Let x be the tweet, uk be the entity assignment of the k-th mention candidate. We use function F(x, yk = uk) to model the score of the k-th mention candidate choosing entity uk.4 The overall scoring function can be decomposed as follows: S(x, y = {uk}K k=1) = K X k=1 F(x, yk = uk) S-MART utilizes regression trees to model the scoring function F(x, yk = uk), which requires point-wise functional gradient for each entity of every mention candidate. Let’s first write down the logistic loss function as L(y∗, S(x, y)) = −log P(y∗|x) = log Z(x) −S(x, y∗) where Z(x) = P y exp(S(x, y)) is the potential function. Then the point-wise gradients can be computed as gku = ∂L ∂F(x, yk = uk) = P(yk = uk|x) −1[y∗ k = uk], where 1[·] represents an indicator function. The conditional probability P(yk = uk|x) can be computed by a variant of the forward-backward algorithm, which we will detail in the next subsection. 4Note that each mention candidate has different own entity sets. 
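For the logistic loss, the point-wise functional gradient takes a particularly simple form once the marginals P(yk = uk | x) are available from the inference procedure described next. A hedged sketch, with variable names chosen here:

def logistic_pointwise_gradients(marginals, gold):
    """g_ku = P(y_k = u | x) - 1[y*_k = u].

    marginals[k][u] holds P(y_k = u | x) for mention candidate k;
    gold[k] is the gold entity (or 'Nil') for candidate k.
    """
    grads = {}
    for k, dist in marginals.items():
        grads[k] = {u: p - (1.0 if gold[k] == u else 0.0)
                    for u, p in dist.items()}
    return grads

Plugged into the generic loop of Section 2, the negated values are exactly what the m-th regression tree is fit to.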
3.2 Inference The non-overlapping structure is distinct from linear chain and semi-Markov chain (Sarawagi and Cohen, 2004) structures. Hence, we propose a carefully designed forward-backward algorithm to calculate P(yk = uk|x) based on current scoring function F(x, yk = uk) given by the regression trees. The non-overlapping constraint distinguishes our inference algorithm from other forward-backward variants. To compute the forward probability, we sort5 the mention candidates by their end indices and define forward recursion by α(u1, 1) = exp(F(x, y1 = u1)) α(uk, k) = exp(F(x, yk = uk)) · P−1 Y p=1 exp(F(x, yk−p = Nil)) · X uk−P α(uk−P , k −P) (4) where k −P is the index of the previous nonoverlapping mention candidate. Intuitively, for the k-th mention candidate, we need to identify its nearest non-overlapping fellow and recursively compute the probability. The overlapping mention candidates can only take the Nil entity. Similarly, we can sort the mention candidates by their start indices and define backward recur5Sorting helps the algorithms find non-overlapping candidates. 507 sion by β(uK, K) =1 β(uk, k) = X uk+Q exp(F(x, yk+Q = uk+Q)) · Q−1 Y q=1 exp(F(x, yk+q = Nil)) · β(uk+Q, k + Q) (5) where k + Q is the index of the next nonoverlapping mention candidate. Note that the third terms of equation (4) or (5) will vanish if there are no corresponding non-overlapping mention candidates. Given the potential function can be computed by Z(x) = P uk α(uk, k)β(uk, k), for entities that are not Nil, P(yk = uk|x) =exp(F(x, yk = uk)) · β(uk, k) Z(x) · P−1 Y p=1 exp(F(x, yk−p = Nil)) · X uk−P α(uk−P , k −P) (6) The probability for the special token Nil can be obtained by P(yk = Nil|x) = 1 − X uk̸=Nil P(yk = uk|x) (7) In the worst case, the total cost of the forwardbackward algorithm is O(max{TK, K2}), where T is the number of entities of a mention candidate.6 Finally, at test time, the decoding problem arg maxy S(x, y) can be solved by a variant of the Viterbi algorithm. 3.3 Beyond S-MART: Modeling entity-entity relationships It is important for entity linking systems to take advantage of the entity-to-entity information while making local decisions. For instance, the identification of entity “eli manning” leads to a strong clue for linking “new york giants” to the NFL team. Instead of defining a more complicated structure and learning everything jointly, we employ a 6The cost is O(K2) only if every mention candidate of the tweet overlaps other mention candidates. In practice, the algorithm is nearly linear w.r.t K. two-stage approach as the solution for modeling entity-entity relationships after we found that SMART achieves high precision and reasonable recall. Specifically, in the first stage, the system identifies all possible entities with basic features, which enables the extraction of entity-entity features. In the second stage, we re-train S-MART on a union of basic features and entity-entity features. We define entity-entity features based on the Jaccard distance introduced by Guo et al. (2013). Let Γ(ei) denotes the set of Wikipedia pages that contain a hyperlink to an entity ei and Γ(t−i) denotes the set of pages that contain a hyperlink to any identified entity ej of the tweet t in the first stage excluding ei. The Jaccard distance between ei and t is Jac(ei, t) = |Γ(ei) ∩Γ(t−i)| |Γ(ei) ∪Γ(t−i)|. In addition to the Jaccard distance, we add one additional binary feature to indicate if the current entity has the highest Jaccard distance among all entities for this mention candidate. 
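At test time, the arg max under the non-overlapping constraint is computed with the Viterbi variant mentioned in Section 3.2. One equivalent reformulation, sketched below under our own naming, observes that every mention candidate contributes its Nil score unless it is selected, so decoding reduces to choosing a non-overlapping set of candidates that maximizes the total gain over Nil (a weighted-interval-scheduling-style dynamic program). Spans are assumed to be inclusive integer token offsets; this reformulation is ours, not the paper's presentation.

import bisect

def decode(candidates):
    """candidates: list of (start, end, scores), where scores maps each
    possible entity (including 'Nil') to F(x, y_k = entity)."""
    cands = sorted(candidates, key=lambda c: c[1])        # sort by end index
    ends = [c[1] for c in cands]

    gains, best_entity = [], []
    for start, end, scores in cands:
        ent, sc = max(((u, s) for u, s in scores.items() if u != "Nil"),
                      key=lambda t: t[1], default=("Nil", float("-inf")))
        gains.append(sc - scores["Nil"])                  # gain over assigning Nil
        best_entity.append(ent)

    dp, choice = [0.0], [None]          # dp[i]: best total gain over the first i candidates
    for i, (start, end, _) in enumerate(cands):
        prev = bisect.bisect_right(ends, start - 1)       # nearest non-overlapping prefix
        take = dp[prev] + gains[i]
        if take > dp[i]:
            dp.append(take)
            choice.append((i, prev))
        else:
            dp.append(dp[i])
            choice.append(None)

    assignment = ["Nil"] * len(cands)                     # Nil everywhere by default
    i = len(cands)
    while i > 0:
        if choice[i] is None:
            i -= 1
        else:
            sel, prev = choice[i]
            assignment[sel] = best_entity[sel]
            i = prev
    return [(c[0], c[1], assignment[j]) for j, c in enumerate(cands)]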
4 Experiments Our experiments are designed to answer the following three research questions in the context of tweet entity linking: • Do non-linear learning algorithms perform better than linear learning algorithms? • Do structured entity linking models perform better than non-structured ones? • How can we best capture the relationships between entities? 4.1 Evaluation Methodology and Data We evaluate each entity linking system using two evaluation policies: Information Extraction (IE) driven evaluation and Information Retrieval (IR) driven evaluation. For both evaluation settings, precision, recall and F1 scores are reported. Our data is constructed from two publicly available sources: Named Entity Extraction & Linking (NEEL) Challenge (Cano et al., 2014) datasets, and the datasets released by Fang and Chang (2014). Note that we gather two datasets from Fang and Chang (2014) and they are used in two different evaluation settings. We refer to these two datasets as TACL-IE and TACL-IR, respectively. We perform some data cleaning and unification on 508 these sets.7 The statistics of the datasets are presented in Table 1. IE-driven evaluation The IE-driven evaluation is the standard evaluation for an end-to-end entity linking system. We follow Carmel et al. (2014) and relax the definition of the correct mention boundaries, as they are often ambiguous. A mention boundary is considered to be correct if it overlaps (instead of being the same) with the gold mention boundary. Please see (Carmel et al., 2014) for more details on the procedure of calculating the precision, recall and F1 score. The NEEL and TACL-IE datasets have different annotation guidelines and different choices of knowledge bases, so we perform the following procedure to clean the data and unify the annotations. We first filter out the annotations that link to entities excluded by our knowledge base. We use the same knowledge base as the ERD 2014 competition (Carmel et al., 2014), which includes the union of entities in Wikipedia and Freebase. Second, we follow NEEL annotation guideline and re-annotate TACL-IE dataset. For instance, in order to be consistent with NEEL, all the user tags (e.g. @BarackObama) are re-labeled as entities in TACL-IE. We train all the models with NEEL Train dataset and evaluate different systems on NEEL Test and TACL-IE datasets. In addition, we sample 800 tweets from NEEL Train dataset as our development set to perform parameter tuning. IR-driven evaluation The IR-driven evaluation is proposed by Fang and Chang (2014). It is motivated by a key application of entity linking — retrieval of relevant tweets for target entities, which is crucial for downstream applications such as product research and sentiment analysis. In particular, given a query entity we can search for tweets based on the match with some potential surface forms of the query entity. Then, an entity linking system is evaluated by its ability to correctly identify the presence or absence of the query entity in every tweet. Our IR-driven evaluation is based on the TACL-IR set, which includes 980 tweets sampled for ten query entities of five entity types (roughly 100 tweets per entity). About 37% of the sampled tweets did not mention the query entity due to the anchor ambiguity. 7We plan to release the cleaned data and evaluation code if license permitted. Data #Tweet #Entity Date NEEL Train 2340 2202 Jul. ˜Aug. 11 NEEL Test 1164 687 Jul. ˜Aug. 11 TACL-IE 500 300 Dec. 12 TACL-IR 980 NA Dec. 12 Table 1: Statistics of data sets. 
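As a small illustration of the relaxed matching used in the IE-driven setting above, the sketch below counts a predicted mention as correct when its span overlaps a gold span and the linked entities agree. This is a simplification of the ERD-style scoring of Carmel et al. (2014); greedy one-to-one matching and inclusive spans are assumptions made here.

def spans_overlap(a, b):
    """Inclusive (start, end) spans overlap if they share any position."""
    return a[0] <= b[1] and b[0] <= a[1]

def ie_prf(pred, gold):
    """pred, gold: lists of (start, end, entity) annotations for one dataset."""
    matched_gold = set()
    tp = 0
    for p_start, p_end, p_ent in pred:
        for gi, (g_start, g_end, g_ent) in enumerate(gold):
            if gi not in matched_gold and p_ent == g_ent and \
               spans_overlap((p_start, p_end), (g_start, g_end)):
                matched_gold.add(gi)
                tp += 1
                break
    prec = tp / len(pred) if pred else 0.0
    rec = tp / len(gold) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1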
4.2 Experimental Settings Features We employ a total number of 37 dense features as our basic feature set. Most of the features are adopted from (Guo et al., 2013)8, including various statistical features such as the probability of the surface to be used as anchor text in Wikipedia. We also add additional Entity Type features correspond to the following entity types: Character, Event, Product and Brand. Finally, we include several NER features to indicate each mention candidate belongs to one the following NER types: Twitter user, Twitter hashtag, Person, Location, Organization, Product, Event and Date. Algorithms Table 2 summarizes all the algorithms that are compared in our experiments. First, we consider two linear structured learning algorithms: Structured Perceptron (Collins, 2002) and Linear Structured SVM (SSVM) (Tsochantaridis et al., 2004). For non-linear models, we consider polynomial SSVM, which employs polynomial kernel inside the structured SVM algorithm. We also include LambdaRank (Quoc and Le, 2007), a neuralbased learning to rank algorithm, which is widely used in the information retrieval literature. We further compare with MART, which is designed for performing multiclass classification using log loss without considering the structured information. Finally, we have our proposed log-loss SMART algorithm, as described in Section 3. 9 Note that our baseline systems are quite strong. Linear SSVM has been used in one of the stateof-the-art tweet entity linking systems (Guo et al., 2013), and the system based on MART is the winning system of the 2014 NEEL Challenge (Cano and others, 2014)10. Table 2 summarizes several properties of the algorithms. For example, most algorithms are struc8We consider features of Base, Capitalization Rate, Popularity, Context Capitalization and Entity Type categories. 9Our pilot experiments show that the log-loss SMART consistently outperforms the hinge-loss S-MART. 10Note that the numbers we reported here are different from the results in NEEL challenge due to the fact that we have cleaned the datasets and the evaluation metrics are slightly different in this paper. 509 Model Structured Non-linear Tree-based Structured Perceptron ✓ Linear SSVM ✓ Polynomial SSVM ✓ ✓ LambdaRank ✓ MART ✓ ✓ S-MART ✓ ✓ ✓ Table 2: Included algorithms and their properties. tured (e.g. they perform dynamic programming at test time) except for MART and LambdaRank, which treat mention candidates independently. Parameter tuning All the hyper-parameters are tuned on the development set. Then, we re-train our models on full training data (including the dev set) with the best parameters. We choose the soft margin parameter C from {0.5, 1, 5, 10} for two structured SVM methods. After a preliminary parameter search, we fixed the number of trees to 300 and the minimum number of documents in a leaf to 30 for all tree-based models. For LambdaRank, we use a two layer feed forward network. We select the number of hidden units from {10, 20, 30, 40} and learning rate from {0.1, 0.01, 0.001}. It is widely known that F1 score can be affected by the trade-off between precision and recall. In order to make the comparisons between all algorithms fairer in terms of F1 score, we include a post-processing step to balance precision and recall for all the systems. Note the tuning is only conducted for the purpose of robust evaluation. 
In particular, we adopt a simple tuning strategy that works well for all the algorithms, in which we add a bias term b to the scoring function value of Nil: F(x, yk = Nil) ←F(x, yk = Nil) + b. We choose the bias term b from values between −3.0 to 3.0 on the dev set and apply the same bias term at test time. 4.3 Results Table 3 presents the empirical findings for SMART and competitive methods on tweet entity linking task in both IE and IR settings. In the following, we analyze the empirical results in details. Linear models vs. non-linear models Table 3 clearly shows that linear models perform worse than non-linear models when they are restricted to the IE setting of the tweet entity linking task. The story is similar in IR-driven evaluation, with −2 −1 0 1 2 50 60 70 80 Bias F1 score SP Linear SSVM Poly. SSVM MART NN S-MART Figure 2: Balance precisions and recalls. X-axis corresponds to values of the bias terms for the special token Nil. Note that S-MART is still the overall winning system without tuning the threshold. the exception of LambdaRank. Among the linear models, linear SSVM demonstrates its superiority over Structured Perceptron on all datasets, which aligns with the results of (Tsochantaridis et al., 2005) on the named entity recognition task. We have many interesting observations on the non-linear models side. First, by adopting a polynomial kernel, the non-linear SSVM further improves the entity linking performances on the NEEL datasets and TACL-IR dataset. Second, LambdaRank, a neural network based model, achieves better results than linear models in IEdriven evaluation, but the results in IR-driven evaluation are worse than all the other methods. We believe the reason for this dismal performance is that the neural-based method tends to overfit the IR setting given the small number of training examples. Third, both MART and S-MART significantly outperform alternative linear and non-linear methods in IE-driven evaluation and performs better or similar to other methods in IR-driven evaluation. This suggests that tree-based non-linear models are suitable for tweet entity linking task. Finally, S-MART outperforms previous state-ofthe-art method Structured SVM by a surprisingly large margin. In the NEEL Test dataset, the difference is more than 10% F1. Overall, the results show that the shallow linear models are not expressive enough to capture the complex patterns in the data, which are represented by a few dense features. Structured learning models To showcase structured learning technique is crucial for entity linking with non-linear models, we compare S-MART against MART directly. As shown in 510 Model NEEL Dev NEEL Test TACL-IE TACL-IR P R F1 P R F1 P R F1 P R F1 Structured Perceptron 75.8 62.8 68.7 79.1 64.3 70.9 74.4 63.0 68.2 86.2 43.8 58.0 Linear SSVM 78.0 66.1 71.5 80.5 67.1 73.2 78.2 64.7 70.8 86.7 48.5 62.2 Polynomial SSVM 77.7 70.7 74.0 81.3 69.0 74.6 76.8 64.0 69.8 91.1 48.8 63.6 LambdaRank 75.0 69.0 71.9 80.3 71.2 75.5 77.8 66.7 71.8 85.8 42.4 56.8 MART 76.2 74.3 75.2 76.8 78.0 77.4 73.4 71.0 72.2 98.1 46.4 63.0 S-MART 79.1 75.8 77.4 83.2 79.2 81.1 76.8 73.0 74.9 95.1 52.2 67.4 + entity-entity 79.2 75.8 77.5 81.5 76.4 78.9 77.3 73.7 75.4 95.5 56.7 71.1 Table 3: IE-driven and IR-driven evaluation results for different models. The best results with basic features are in bold. The results are underlined if adding entity-entity features gives the overall best results. 
Table 3, S-MART can achieve higher precision and recall points compared to MART on all datasets in terms of IE-driven evaluation, and can improve F1 by 4 points on NEEL Test and TACL-IR datasets. The task of entity linking is to produce non-overlapping entity assignments that match the gold mentions. By adopting structured learning technique, S-MART is able to automatically take into account the non-overlapping constraint during learning and inference, and produce global optimal entity assignments for mention candidates of a tweet. One effect is that S-MART can easily eliminate some common errors caused by popular entities (e.g. new york in Figure 1). Modeling entity-entity relationships Entityentity relationships provide strong clues for entity disambiguation. In this paper, we use the simple two-stage approach described in Section 3.3 to capture the relationships between entities. As shown in Table 3, the significant improvement in IR-driven evaluation indicates the importance of incorporating entity-entity information. Interestingly, while IR-driven results are significantly improved, IE-driven results are similar or even worse given entity-entity features. We believe the reason is that IE-driven and IR-driven evaluations focus on different aspects of tweet entity linking task. As Guo et al. (2013) shows that most mentions in tweets should be linked to the most popular entities, IE setting actually pays more attention on mention detection sub-problem. In contrast to IE setting, IR setting focuses on entity disambiguation, since we only need to decide whether the tweet is relevant to the query entity. Therefore, we believe that both evaluation policies are needed for tweet entity linking. Balance Precision and Recall Figure 2 shows the results of tuning the bias term for balancing precision and recall on the dev set. The results show that S-MART outperforms competitive approaches without any tuning, with similar margins to the results after tuning. Balancing precision and recall improves F1 scores for all the systems, which suggests that the simple tuning method performs quite well. Finally, we have an interesting observation that different methods have various scales of model scores. 5 Related Work Linear structured learning methods have been proposed and widely used in the literature. Popular models include Structured Perceptron (Collins, 2002), Conditional Random Field (Lafferty et al., 2001) and Structured SVM (Taskar et al., 2004; Tsochantaridis et al., 2005). Recently, many structured learning models based on neural networks have been proposed and are widely used in language modeling (Bengio et al., 2006; Mikolov et al., 2010), sentiment classification (Socher et al., 2013), as well as parsing (Socher et al., 2011). Cortes et al. (2014) recently proposed a boosting framework which treats different structured learning algorithms as base learners to ensemble structured prediction results. Tree-based models have been shown to provide more robust and accurate performances than neural networks in some tasks of computer vision (Roe et al., 2005; Babenko et al., 2011) and information retrieval (Li et al., 2007; Wu et al., 2010), suggesting that it is worth to investigate tree-based non-linear models for structured learning problems. To the best of our knowledge, TreeCRF (Dietterich et al., 2004) is the only work that explores tree-based methods for structured learning problems. The relationships between TreeCRF and our work have been discussed in Section 2. 
511 Early research on entity linking has focused on well written documents (Bunescu and Pasca, 2006; Cucerzan, 2007; Milne and Witten, 2008). Due to the raise of social media, many techniques have been proposed or tailored to short texts including tweets, for the problem of entity linking (Ferragina and Scaiella, 2010; Meij et al., 2012; Guo et al., 2013) as well as the related problem of named entity recognition (NER) (Ritter et al., 2011). Recently, non-textual information such as spatial and temporal signals have also been used to improve entity linking systems (Fang and Chang, 2014). The task of entity linking has attracted a lot of attention, and many shared tasks have been hosted to promote entity linking research (Ji et al., 2010; Ji and Grishman, 2011; Cano and others, 2014; Carmel et al., 2014). Building an end-to-end entity linking system involves in solving two interrelated sub-problems: mention detection and entity disambiguation. Earlier research on entity linking has been largely focused on the entity disambiguation problem, including most work on entity linking for wellwritten documents such as news and encyclopedia articles (Cucerzan, 2007) and also few for tweets (Liu et al., 2013). Recently, people have focused on building systems that consider mention detection and entity disambiguation jointly. For example, Cucerzan (2012) delays the mention detection decision and consider the mention detection and entity linking problem jointly. Similarly, Sil and Yates (2013) proposed to use a reranking approach to obtain overall better results on mention detection and entity disambiguation. 6 Conclusion and Future Work In this paper, we propose S-MART, a family of structured learning algorithms which is flexible on the choices of the loss functions and structures. We demonstrate the power of S-MART by applying it to tweet entity linking, and it significantly outperforms the current state-of-the-art entity linking systems. In the future, we would like to investigate the advantages and disadvantages between treebased models and other non-linear models such as deep neural networks or recurrent neural networks. Acknowledgments We thank the reviewers for their insightful feedback. We also thank Yin Li and Ana Smith for their valuable comments on earlier version of this paper. References S. Asur and B.A. Huberman. 2010. Predicting the future with social media. arXiv preprint arXiv:1003.5699. Boris Babenko, Ming-Hsuan Yang, and Serge Belongie. 2011. Robust object tracking with online multiple instance learning. Pattern Analysis and Machine Intelligence, IEEE Transactions on, pages 1619–1632. Yoshua Bengio, Holger Schwenk, Jean-S´ebastien Sen´ecal, Fr´ederic Morin, and Jean-Luc Gauvain. 2006. Neural probabilistic language models. In Innovations in Machine Learning, pages 137–186. R. C Bunescu and M. Pasca. 2006. Using encyclopedic knowledge for named entity disambiguation. In Proceedings of the European Chapter of the ACL (EACL), pages 9–16. AE Cano et al. 2014. Microposts2014 neel challenge. In Microposts2014 NEEL Challenge. Amparo E Cano, Giuseppe Rizzo, Andrea Varga, Matthew Rowe, Milan Stankovic, and Aba-Sah Dadzie. 2014. Making sense of microposts (# microposts2014) named entity extraction & linking challenge. Making Sense of Microposts (# Microposts2014). David Carmel, Ming-Wei Chang, Evgeniy Gabrilovich, Bo-June Paul Hsu, and Kuansan Wang. 2014. Erd’14: entity recognition and disambiguation challenge. In ACM SIGIR Forum, pages 63–77. Michael Collins. 2002. 
Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the conference on Empirical methods in natural language processing (EMNLP), pages 1–8. Corinna Cortes, Vitaly Kuznetsov, and Mehryar Mohri. 2014. Learning ensembles of structured prediction rules. In Proceedings of ACL. Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on wikipedia data. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 708– 716. Silviu Cucerzan. 2012. The msr system for entity linking at tac 2012. In Text Analysis Conference. Thomas G Dietterich, Adam Ashenfelter, and Yaroslav Bulatov. 2004. Training conditional random fields via gradient tree boosting. In Proceedings of the twenty-first international conference on Machine learning (ICML), pages 28–35. Yuan Fang and Ming-Wei Chang. 2014. Entity linking on microblogs with spatial and temporal signals. Transactions of the Association for Computational Linguistics (ACL), pages 259–272. 512 P. Ferragina and U. Scaiella. 2010. TAGME: on-thefly annotation of short text fragments (by Wikipedia entities). In Proceedings of ACM Conference on Information and Knowledge Management (CIKM), pages 1625–1628. Jerome H Friedman. 2001. Greedy function approximation: a gradient boosting machine. Annals of Statistics, pages 1189–1232. Stephen Guo, Ming-Wei Chang, and Emre Kiciman. 2013. To link or not to link? a study on end-to-end tweet entity linking. In Proceedings of the North American Chapter of the Association for Computational Linguistics (NAACL), pages 1020–1030. Heng Ji and Ralph Grishman. 2011. Knowledge base population: Successful approaches and challenges. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 1148–1158. Heng Ji, Ralph Grishman, Hoa Trang Dang, Kira Griffitt, and Joe Ellis. 2010. Overview of the tac 2010 knowledge base population track. In Third Text Analysis Conference (TAC). John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the 18th international conference on Machine learning (ICML), pages 282–289. Ping Li, Qiang Wu, and Christopher J Burges. 2007. Mcrank: Learning to rank using multiple classification and gradient boosting. In Advances in neural information processing systems (NIPS), pages 897– 904. Xiaohua Liu, Yitong Li, Haocheng Wu, Ming Zhou, Furu Wei, and Yi Lu. 2013. Entity linking for tweets. In Proceedings of the Association for Computational Linguistics (ACL), pages 1304–1311. Michael Mathioudakis and Nick Koudas. 2010. Twittermonitor: trend detection over the twitter stream. In Proceedings of the 2010 ACM SIGMOD International Conference on Management of data (SIGMOD), pages 1155–1158. E. Meij, W. Weerkamp, and M. de Rijke. 2012. Adding semantics to microblog posts. In Proceedings of International Conference on Web Search and Web Data Mining (WSDM), pages 563–572. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, pages 1045–1048. D. Milne and I. H. Witten. 2008. Learning to link with Wikipedia. In Proceedings of ACM Conference on Information and Knowledge Management (CIKM), pages 509–518. Kevin P Murphy. 2012. Machine learning: a probabilistic perspective. MIT press. C Quoc and Viet Le. 2007. Learning to rank with nonsmooth cost functions. 
pages 193–200. A. Ritter, S. Clark, Mausam, and O. Etzioni. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of the Conference on Empirical Methods for Natural Language Processing (EMNLP), pages 1524–1534. Byron P Roe, Hai-Jun Yang, Ji Zhu, Yong Liu, Ion Stancu, and Gordon McGregor. 2005. Boosted decision trees as an alternative to artificial neural networks for particle identification. Nuclear Instruments and Methods in Physics Research Section A: Accelerators, Spectrometers, Detectors and Associated Equipment, pages 577–584. Sunita Sarawagi and William W Cohen. 2004. Semimarkov conditional random fields for information extraction. In Advances in Neural Information Processing Systems (NIPS), pages 1185–1192. Avirup Sil and Alexander Yates. 2013. Re-ranking for joint named-entity recognition and linking. In Proceedings of ACM Conference on Information and Knowledge Management (CIKM), pages 2369– 2374. Richard Socher, Cliff C Lin, Chris Manning, and Andrew Y Ng. 2011. Parsing natural scenes and natural language with recursive neural networks. In Proceedings of the 28th International Conference on Machine Learning (ICML), pages 129–136. Richard Socher, Alex Perelygin, Jean Y Wu, Jason Chuang, Christopher D Manning, Andrew Y Ng, and Christopher Potts. 2013. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1631–1642. Ben Taskar, Carlos Guestrin, and Daphne Roller. 2004. Max-margin markov networks. Advances in neural information processing systems, 16:25. Ioannis Tsochantaridis, Thomas Hofmann, Thorsten Joachims, and Yasemin Altun. 2004. Support vector machine learning for interdependent and structured output spaces. In Proceedings of the twentyfirst international conference on Machine learning (ICML), page 104. Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hofmann, and Yasemin Altun. 2005. Large margin methods for structured and interdependent output variables. In Journal of Machine Learning Research, pages 1453–1484. Qiang Wu, Christopher JC Burges, Krysta M Svore, and Jianfeng Gao. 2010. Adapting boosting for information retrieval measures. Information Retrieval, pages 254–270. 513
2015
49
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 42–52, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Describing Images using Inferred Visual Dependency Representations Desmond Elliott and Arjen P. de Vries Information Access Group Centrum Wiskunde & Informatica Amsterdam, The Netherlands [email protected], [email protected] Abstract The Visual Dependency Representation (VDR) is an explicit model of the spatial relationships between objects in an image. In this paper we present an approach to training a VDR Parsing Model without the extensive human supervision used in previous work. Our approach is to find the objects mentioned in a given description using a state-of-the-art object detector, and to use successful detections to produce training data. The description of an unseen image is produced by first predicting its VDR over automatically detected objects, and then generating the text with a template-based generation model using the predicted VDR. The performance of our approach is comparable to a state-ofthe-art multimodal deep neural network in images depicting actions. 1 Introduction Humans typically write the text accompanying an image, which is a time-consuming and expensive activity. There are many circumstances in which people are well-suited to this task, such as captioning news articles (Feng and Lapata, 2008) where there are complex relationships between the modalities (Marsh and White, 2003). In this paper we focus on generating literal descriptions, which are rarely found alongside images because they describe what can easily be seen by others (Panofsky, 1939; Shatford, 1986; Hodosh et al., 2013). A computer that can automatically generate these literal descriptions, filling the gap left by humans, may improve access to existing image collections or increase information access for visually impaired users. There has been an upsurge of research in this area, including models that rely on spatial relationships (Farhadi et al., 2010), corpus-based relationships (Yang et al., 2011), spatial and visual attributes (Kulkarni et al., 2011), n-gram phrase fusion from Web-scale corpora (Li et al., 2011), treesubstitution grammars (Mitchell et al., 2012), selecting and combining phrases from large imagedescription collections (Kuznetsova et al., 2012), using Visual Dependency Representations to capture spatial and corpus-based relationships (Elliott and Keller, 2013), and in a generative framework over densely-labelled data (Yatskar et al., 2014). The most recent developments have focused on deep learning the relationships between visual feature vectors and word-embeddings with language generation models based on recurrent neural networks or long-short term memory networks (Karpathy and Fei-Fei, 2015; Vinyals et al., 2015; Mao et al., 2015; Fang et al., 2015; Donahue et al., 2015; Lebret et al., 2015). An alternative thread of research has focused on directly pairing images with text, based on kCCA (Hodosh et al., 2013) or multimodal deep neural networks (Socher et al., 2014; Karpathy et al., 2014). We revisit the Visual Dependency Representation (Elliott and Keller, 2013, VDR), an intermediate structure that captures the spatial relationships between objects in an image. 
Spatial context has been shown to be useful in object recognition and naming tasks because humans benefit from the visual world conforming to their expectations (Biederman et al., 1982; Bar and Ullman, 1996). The spatial relationships defined in VDR are closely, but independently, related to cognitively plausible spatial templates (Logan and Sadler, 1996) and region connection calculus (Randell et al., 1992). In the image description task, explicitly modelling the spatial relationships between observed objects constrains how an image should be described. An example can be seen in Figure 1, where the training VDR identifies the defining relationship between the man and the laptop, which may be re42 A man is using a laptop nsubj dobj bike? -2.3 ... person? 3.5 laptop? 1.2 ... CNN person laptop beside VDR Parser R-CNN person laptop beside VDR Parser A person is using a laptop Language Generator Figure 1: We present an approach to inferring VDR training data from images paired with descriptions (top), and for generating descriptions from VDR (bottom). Candidates for the subject and object in the image are extracted from the description. An object detector1searches for the objects and deterministically produces a training instance, which is used to train a VDR Parser to predict the relationships between objects in unseen images. When an unseen image is presented to the model, we first extract N-candidate objects for the image. The detected objects are then parsed into a VDR structure, which is passed into a template-based language generator to produce a description of the image. alised as a “using”, “typing”, or “working” relationship between the objects. The main limitation of previous research on VDR has been the reliance on gold-standard training annotations, which requires trained annotators. We present the first approach to automatically inferring VDR training examples from natural scenes using only an object detector and an image description. Ortiz et al. (2015) have recently presented an alternative treatment of VDR within the context of abstract scenes and phrasebased machine translation. Figure 1 shows a detailed overview of our approach. At training time, we learn a VDR Parsing model from representations that are constructed by searching for the subject and object in the image. The description of an unseen image is generated using a templatebased generation model that leverages the VDR predicted over the top-N objects extracted from an object detector. We evaluate our method for inferring VDRs in an image description experiment on the Pascal1K (Rashtchian et al., 2010) and VL2K data sets (Elliott and Keller, 2013) against two models: the bi-directional recurrent neural network (Karpathy and Fei-Fei, 2015, BRNN) and MIDGE (Mitchell et al., 2012). The main finding is that the quality of the descriptions generated by our method 1The image of the R-CNN object detector was modified with permission from Girshick et al. (2014). depends on whether the images depict an action. In the VLT2K data set of people performing actions, the performance of our approach is comparable to the BRNN; in the more diverse Pascal1K dataset, the BRNN is substantially better than our method. In a second experiment, we transfer the VDR-based model from the VLT2K data set to the Pascal1K data set without re-training, which improves the descriptions generated in the Pascal1K data set. This suggests that refining how we extract training data may yield further improvements to VDR-based image description. 
The code and generated descriptions are available at http://github.com/elliottd/vdr/. 2 Automatically Inferring VDRs The Visual Dependency Representation is a structured representation of an image that explicitly models the spatial relationships between objects. In this representation, the spatial relationship between a pair of objects is encoded with one of the following eight options: above, below, beside, opposite, on, surrounds, infront, and behind. Previous work on VDR-based image description has relied on training data from expert human annotators, which is expensive and difficult to scale to other data sets. In this paper, we describe an approach to automatically inferring VDRs using only an object detector and the description of an image. Our aim is to define an automated version 43 Relation Definition Beside The angle between the subject and the object is either between 315◦ and 45◦or 135◦and 225◦. Above The angle between the subject and object is between 225◦and 315◦. Below The angle between the subject and object is between 45◦and 135◦. On More than 50% of the subject overlaps with the object. Surrounds More than 90% of the subject overlaps with the object. Table 1: The cascade of spatial relationships between objects in VDR. We always use the last relationship that matches. These definitions are mostly taken from (Elliott and Keller, 2013), except that we remove the 3D relationships. Angles are defined with respect to the unit circle, which has 0◦on the right. All relations are specific with respect to the centroid of the bounding boxes. of the human process used to create gold-standard data (Elliott and Keller, 2013). An inferred VDR is constructed by searching for the subject and object referred to in the description of an image using an object detector. If both the subject and object can be found in the image, a VDR is created by attaching the detected subject to the detected object, given the spatial relationship between the object bounding boxes. The spatial relationships that can be applied between subjects and objects are defined in the cascade defined in Table 1. The set of relationships was reduced from eight to six due to the difficulty in predicting the 3D relationships in 2D images (Eigen et al., 2014). The spatial relation selected for a pair of objects is determined by applying each template defined in Table 1 to the object pair. We use only the final matching relationship, although future work may consider applying multiple matching relationships between objects. Given a set of inferred VDR training examples, we train a VDR Parsing Model with the VDR+IMG feature set using only the inferred examples (Elliott et al., 2014). We tried training a model by combining the inferred and gold-standard VDRs but this lead to an erratic parsing model that would regularly predict flat structures instead of object– person 3.13 c. keyboard 1.22 laptop 0.77 sofa 0.61 waffle iron 0.47 tape player 0.21 banjo 0.14 accordion -0.16 iPod -0.26 vacuum -0.40 Figure 2: An example of the most confident object detections from the R-CNN object detector. object relationships. One possibility for this behaviour is the mismatch caused by removing the infront and behind relationships in the inferred training data. Another possible explanation is the gold-standard data contains deeper and more complex structures than the simple object–object structures we infer. 2.1 Linguistic Processing The description of an image is processed to extract candidates for the mentioned objects. 
We extract candidates from the nsubj and dobj tokens in the dependency parsed description2. If the parsed description does not contain both a subject and an object, as defined here, the example is discarded. 2.2 Visual Processing If the dependency parsed description contains candidates for the subject and object of an image, we attempt to find these objects in the image. We use the Regions with Convolutional Neural Network features object detector (Girshick et al., 2014, R-CNN) with the pre-trained bvlc reference ilsrvc13 detection model implemented in Caffe (Jia et al., 2014). This object detection model is able to detect 200 different types of objects, with a mean average precision of 31.4% in the ImageNet Large-Scale Visual Recognition Challenge3 (Russakovsky et al., 2014). The output of the object detector is a bounding box with real-valued confidence scores, as shown in 2The descriptions are Part-of-Speech tagged using the Stanford POS Tagger v3.1.0 (Toutanova et al., 2003) with the english-bidirectional-distsim pre-trained model. The tagged descriptions are then Dependency Parsed using Malt Parser v 1.7.2 (Nivre et al., 2007) with the engmalt.poly-1.7 pre-trained model. 3The state-of-the-art result for this task is 37.2% using a Network in Network architecture (Lin et al., 2014a); a pretrained detection model was not available in the Caffe Model Zoo at the time of writing. 44 A boy is using a laptop (a) on A man is riding a bike (b) above A woman is riding a bike (c) surrounds A woman is riding a horse (d) surrounds A man is playing a sax (e) surrounds A man is playing a guitar (f) beside The woman is wearing a helmet (g) surrounds Figure 3: Examples of the object detections and automatically inferred VDR. In each example, the object detector candidates were extracted from the description and the VDR relationships were determined by the cascade in Table 1. Automatically inferring VDR allows us to learn differences in spatial relationships from different camera viewpoints, such as people riding bicycles. Figure 2. The confidence scores are not probabilities and can vary widely across images. The words in a description that refer to objects in an image are not always within the constrained vocabulary of the object labels in the object detection model. We increase the chance of finding objects with two simple back-offs: by lemmatising the token, and transforming the token into its WordNet hypernym parent. If the subject and the object can be found in the image, we create an inferred VDR from the detections, otherwise we discard this training example. Figure 3 shows a collection of automatically inferred VDRs. One of the immediate benefits of VDR, as a representation, is that we can easily interpret the structures extracted from images. An example of helpful object orientation invariance can be seen in 3 (b) and (c), where VDR captures the two different types of spatial relationships between people and bicycles that are grounded in the verb “riding”. This type of invariance is useful and it suggests VDR can model interacting objects from various viewpoints. We note here the similarities between automatically inferred VDR and Visual Phrases (Sadeghi and Farhadi, 2011). The main difference between these models is that VDR is primarily concerned with object–object interactions for generation and retrieval tasks, whereas Visual Phrases were intended to model person– object interactions for activity recognition. 
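The cascade of Table 1 can be made concrete with the following sketch, which assigns a spatial relation to a subject/object pair of bounding boxes, using the box centroids for the angle tests and the fraction of the subject box covered by the object box for the On/Surrounds tests. The direction of the angle (measured from the subject towards the object) and the handling of image coordinates are assumptions on our part; the released code may implement the geometry differently.

```python
import math

def centroid(box):
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def overlap_ratio(subj, obj):
    """Fraction of the subject box's area covered by the object box."""
    x1 = max(subj[0], obj[0]); y1 = max(subj[1], obj[1])
    x2 = min(subj[2], obj[2]); y2 = min(subj[3], obj[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = (subj[2] - subj[0]) * (subj[3] - subj[1])
    return inter / area if area > 0 else 0.0

def spatial_relation(subj, obj):
    """Apply the cascade of Table 1; the last matching relation wins."""
    cx_s, cy_s = centroid(subj)
    cx_o, cy_o = centroid(obj)
    # Unit-circle angle from subject centroid to object centroid (0 degrees = right).
    # Image y grows downwards, so the vertical difference is negated.
    angle = math.degrees(math.atan2(-(cy_o - cy_s), cx_o - cx_s)) % 360
    relation = None
    if angle >= 315 or angle < 45 or (135 <= angle < 225):
        relation = "beside"
    if 225 <= angle < 315:
        relation = "above"
    if 45 <= angle < 135:
        relation = "below"
    if overlap_ratio(subj, obj) > 0.5:
        relation = "on"
    if overlap_ratio(subj, obj) > 0.9:
        relation = "surrounds"
    return relation
```

Because the checks are applied in Table 1 order and only the last match is kept, a heavily overlapping pair is labelled on or surrounds even when an angle-based relation would also fire, which matches the "always use the last relationship that matches" rule.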
2.3 Building a Language Model We build a language model using the subjects, verbs, objects, and spatial relationships from the successfully constructed training examples. The subjects and objects take the form of the object detector labels to reduce the effects of sparsity. The verbs are found as the direct common verb parent of the subject and object in the dependency parsed sentence. We stem the verbs using morpha, to reduce sparsity, and inflect them in a generated description with +ing using morphg (Minnen et al., 2001). The spatial relationship between the subject and object region is used to help constrain language generation to produce descriptions, given observed spatial contexts in a VDR. 45 person laptop sofa banjo vacuum c=3.12 c=0.77 c=0.61 c=0.14 c=-0.40 beside root beside beside VDR Parser A person is using a laptop (0.84) A person is playing a banjo (0.71) A person is beside a vacuum (0.38)† A person is in the image (0.96)⋆ Language Generator Figure 4: An overview of VDR-constrained language generation. We extract the top-N objects from an image using an object detector and predict the spatial relationships between the objects using a VDR Parser trained over the inferred training data. Descriptions are generated for all parent–child subtrees in the VDR, and the final text has the highest combined corpus and visual confidence. †: only generated is there are no verbs between the objects in the language model; ⋆: only generated if there are no verbs between any pairs of objects in the image. 3 Generating Descriptions The description of an image is generated using a template-based language generation model designed to exploit the structure encoded in VDR. The language generation model extends Elliott and Keller (2013) with the visual confidence scores from the object detector. Figure 4 shows an overview of the generation process. The top-N objects are extracted from an image using the pre-trained R-CNN object detector (see Section 2.2 for more details). We remove nonmaximal detections with the same class label that overlap by more than 30%. The objects are then parsed into a VDR structure using the VDR Parser trained on the automatically inferred training data. Given the VDR over the set of detected objects, we generate all possible descriptions of the image that can be produced in a depth-first traversal of the VDR. A description is assigned a score that combines the corpus-based evidence and visual confidence of the objects selected for the description. The descriptions are generated using the following template: DT head is V DT child. In this template, head and child are the labels of the objects that appear in the head and child positions of a specific VDR subtree. V is a verb determined from a subject-verb-object-spatial relation model derived from the training data descriptions. This model captures statistics about nouns that appear as subjects and objects, the verbs between them, and spatial relationships observed in the inferred training VDRs. The verb v that satisfies the V field is determined as follows: v = arg max v p(v|head, child, spatial) (1) p(v|head,child, spatial) = p(v|head) · p(child|v, head)· p(spatial|child, v, head) (2) If no verbs were observed between a particular object–object pair in the training corpus, V is filled using a back-off that uses the spatial relationship label between the objects in the VDR. 
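A minimal, unsmoothed version of this subject-verb-object-spatial model is sketched below: Equation (2) is estimated as a chain of maximum-likelihood conditionals over (head, verb, child, spatial) tuples counted in the training data, and Equation (1) is an arg-max over the observed verbs, falling back to the spatial label when no verb has been seen for the pair. The class and method names are illustrative, and the paper does not state whether any smoothing is applied.

```python
from collections import Counter

class SVOSpatialModel:
    """Count-based estimate of p(v|head) * p(child|v,head) * p(spatial|child,v,head),
    built from a list of (head, verb, child, spatial) training tuples."""
    def __init__(self, tuples):
        self.head = Counter()
        self.head_verb = Counter()
        self.head_verb_child = Counter()
        self.full = Counter()
        self.verbs = set()
        for h, v, c, s in tuples:
            self.head[h] += 1
            self.head_verb[(h, v)] += 1
            self.head_verb_child[(h, v, c)] += 1
            self.full[(h, v, c, s)] += 1
            self.verbs.add(v)

    def prob(self, v, head, child, spatial):
        def cond(num, den):
            return num / den if den else 0.0
        return (cond(self.head_verb[(head, v)], self.head[head])
                * cond(self.head_verb_child[(head, v, child)], self.head_verb[(head, v)])
                * cond(self.full[(head, v, child, spatial)], self.head_verb_child[(head, v, child)]))

    def best_verb(self, head, child, spatial):
        """Equation (1): arg-max over observed verbs; returns None if no verb was
        ever seen for this pair, in which case the spatial label is used instead."""
        scored = [(self.prob(v, head, child, spatial), v) for v in self.verbs]
        best_p, best_v = max(scored, default=(0.0, None))
        return best_v if best_p > 0 else None
```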
The object detection confidence values, which are not probabilities and can vary substantially between images, are transformed into the range [0,1] using sgm(conf) = 1 1+e−conf . The final score assigned to a description is then used to rank all of the candidate descriptions, and the highest-scoring description is assigned to an image: score(head, v,child, spatial) = p(v|head, child, spatial)· sgm(head) · sgm(child) (3) If the VDR Parser does not predict any relationships between objects in an image, which may happen if all of the objects have never been observed in the training data, we use a back-off template to generate the description. In this case, the most confidently detected object in the image is used with the following template: A/An object is in the image. The number of objects N objects extracted from an unseen image is optimised by maximising the sentence-level Meteor score of the generated descriptions in the development data. 4 Experiments We evaluate our approach to automatically inferring VDR training data in an automatic image description experiment. The aim in this task is to 46 generate a natural language description of an image, which is evaluated directly against multiple reference descriptions. 4.1 Models We compare our approach against two state-ofthe-art image description models. MIDGE generates text based on tree-substitution grammar and relies on discrete object detections (Mitchell et al., 2012) for visual input. We make a small modification to MIDGE so it uses all of the top-N detected objects, regardless of the confidence of the detections4. BRNN is a multimodal deep neural network that generates descriptions directly from vector representations of the image and the description (Karpathy and Fei-Fei, 2015). The images are represented by the visual feature vector extracted from the FC7 layer of the VGG 16-layer convolutional neural network (Simonyan and Zisserman, 2015) and the descriptions are represented as a word-embedding vector. 4.2 Evaluation Measures We evaluate the generated descriptions using sentence-level Meteor (Denkowski and Lavie, 2011) and BLEU4 (Papineni et al., 2002), which have been shown to have moderate correlation with humans (Elliott and Keller, 2014). We adopt a jack-knifing evaluation methodology, which enables us to report human–human results (Lin and Och, 2004), using MultEval (Clark et al., 2011). 4.3 Data Sets We perform our experiments on two data sets: Pascal1K and VLT2K. The Pascal1K data set contains 1,000 images sampled from the PASCAL Object Detection Challenge data set (Everingham et al., 2010); each image is paired with five reference descriptions collected from Mechanical Turk. It contains a wide variety of subject matter drawn from the original 20 PASCAL Detection classes. The VLT2K data set contains 2,424 images taken from the trainval 2011 portion of the PASCAL Action Recognition Challenge; each image is paired with three reference descriptions, also collected from Mechanical Turk. We randomly split the images into 80% training, 10% validation, and 10% test. 4In personal communication with Margaret Mitchell, she explained that the object confidence thresholds for MIDGE were determined by visual inspection on held-out data, which we decided was not feasible for 200 new detectors. VLT2K Pascal1K Meteor BLEU Meteor BLEU VDR 16.0 14.8 7.4 9.0 BRNN 18.6 23.7 12.6 16.0 -genders 16.6 17.4 12.1 15.1 MIDGE 5.5 8.2 3.6 9.1 Human 26.4 23.3 21.7 20.6 Table 2: Sentence-level evaluation of the generated descriptions. 
VDR is comparable to BRNN when the images exclusively depict actions (VLT2K). In a more diverse data set, BRNN generates better descriptions (Pascal1K). 4.4 Results Table 2 shows the results of the image description experiment. The main finding of our experiments is that the performance of our proposed approach VDR depends on the type of images. We found that VDR is comparable to the deep neural network BRNN on the VLT2K data set of people performing actions. This is consistent with the hypothesis underlying VDR: it is useful to encode the spatial relationships between objects in images. The difference between the models is increased by the inability of the object detector used by VDR to predict bounding boxes for three objects (cameras, books, and phones) crucial to describing 30% of the images in this data set. In the more diverse Pascal1K data set, which does not necessarily depict people performing actions, the deep neural network generates substantially better descriptions than VDR and MIDGE. The tree-substitution grammar approach to generating descriptions used by MIDGE does not perform well on either data set. There is an obvious discrepancy between the BLEU4 and Meteor scores for the models. BLEU4 relies on lexical matching between sentences and thus penalises semantically equivalent descriptions. For example, identifying the gender of a person is important for generating a good description. However, object recognizers are not (yet) able to reliably achieve this distinction, and we only have a single recogniser for “persons”. The BRNN generates descriptions with “man” and “woman”, which leads to higher BLEU scores than our VDR model, but this is based on corpus statistics than the observed visual information. Me47 VDR is better VDR: A person is playing a saxophone. BRNN: A man is playing a guitar VDR: A person is playing a guitar. BRNN: A man is jumping off a cliff VDR: A person is playing a drum. BRNN: A man is standing on a BRNN is better VDR: A person is using a computer. BRNN: A man is jumping on a trampoline VDR: A person is riding a horse. BRNN: A group of people riding horses VDR: A person is below sunglasses. BRNN: A man is reading a book Equally good VDR: A person is sitting a table. BRNN: A man is sitting on a chair VDR: A person is using a laptop. BRNN: A man is using a computer VDR: A person is riding a horse. BRNN: A man is riding a horse Equally bad VDR: A person is holding a microphone. BRNN: A man is taking a picture VDR: A person is driving a car. BRNN: A man is sitting on a phone VDR: A person is driving a car. BRNN: A man is riding a bike Figure 5: Examples of descriptions generated using VDR and the BRNN in the VLT2K data set. Keen readers are encouraged to inspect the second image with a magnifying glass or an object detector. 48 2 4 6 8 10 12 14 16 18 20 Number of detected objects 12 13 14 15 16 17 18 Score Meteor BLEU4 Figure 6: Optimising the number of detected objects against generated description Meteor scores for our model. Improvements are seen until eight objects, which suggests good descriptions do not always need the most confident detections. teor is able to back-off from “man” or “woman” to “person” and still give partial credit to the description. If we replace the gendered referents in the descriptions generated by the BRNN, its performance on the VLT2K data set drops by 2.0 Meteor points and 6.3 BLEU points. 
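The "-genders" row of Table 2 appears to correspond to a simple neutralisation of this kind applied to the BRNN output before re-running the sentence-level evaluation. The sketch below shows one way such a replacement could be done; the exact list of gendered referents that were replaced is not given in the paper, so the mapping here is hypothetical.

```python
import re

# Hypothetical mapping; the paper does not list the replaced referents.
NEUTRAL = {"man": "person", "woman": "person", "men": "people", "women": "people",
           "boy": "person", "girl": "person"}

def neutralise_genders(description):
    """Replace gendered referents with neutral ones before re-scoring with Meteor/BLEU."""
    pattern = r"\b(" + "|".join(NEUTRAL) + r")\b"
    return re.sub(pattern, lambda m: NEUTRAL[m.group(0).lower()],
                  description, flags=re.IGNORECASE)

print(neutralise_genders("A man is riding a horse"))  # -> "A person is riding a horse"
```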
Figure 6 shows the effect of optimising the number of objects extracted from an image against the eventual Meteor score of a generated description in the validation data. It can be seen that the most confidently predicted objects are not always the most useful objects for generating descriptions. Interestingly, the quality of the descriptions does not significantly decrease with an increased number of detected objects, suggesting our model formulation is appropriately discarding unsuitable detections. Figure 5 shows examples of the descriptions generated by VDR and BRNN on the VLT2K validation set. The examples where VDR generates better descriptions than BRNN are because the VDR Parser makes good decisions about which objects are interacting in an image. In the examples where the BRNN is better than VDR, we see that the multimodal RNN language model succeeds at describing intransitive verbs, group events, and objects not present in the R-CNN object detector. Both models generate bad descriptions when the visual input pushes them in the wrong direction, seen at the bottom of the figure. VLT →Pascal Meteor BLEU VDR 7.4 →8.2 9.1 →9.2 BRNN 12.6 →8.1 16.0 →10.2 Table 3: Sentence-level scores when transferring models directly between data sets with no retraining. The VDR-based approach generates better descriptions in the Pascal1K data set if we transfer the model from the VLT2K data set. 4.5 Transferring Models The main reason for the low performance of VDR on the Pascal1K data set is that the linguistic and visual processing steps (Section 2) discard too many training examples. We found that only 190 of the 4,000 description in the training data were used to infer VDRs. This was because most of the descriptions did not contain both a subject and an object, as required by our method. This observation led us to perform a second experiment where we transferred the VDR Parsing and Language Generation models between data sets. The aim of this experiment was to determine whether VDR simply cannot work on more widely diverse data sets, or whether the process we defined to replicate human VDR annotation was too strict. Table 3 shows the results of the model transfer experiment. In general, neither model is particularly good at transferring between data sets. This could be attributed to the shift in the types of scenes depicted in each data set. However, transferring VDR from the VLT2K to the Pascal1K data set improves the generated descriptions from 7.4 →8.2 Meteor points. The performance of BRNN substantially decreases when transferring between data sets, suggesting that the model may be overfitting its training domain. 4.6 Discussion Notwithstanding the conceptual differences between multi-modal deep learning and learning an explicit spatial model of object–object relationships, two key differences between the BRNN and our approach are the nature visual input and the language generation models. The neural network model can readily use the pre-softmax visual feature vector from any of the pre-trained models available in the Caffe Model 49 Zoo, whereas VDR is currently restricted to discrete object detector outputs from those models. The implication of this is that the VDR-based approach is unable to describe 30% of the data in the VLT2K data set. This is due to the object detection model not recognising crucial objects for three of the action classes: cameras, books, and telephones. 
We considered using the VGG-16 pretrained model from the ImageNet Recognition and Localization task in the RCNN object detector, thus mirroring the detection model used by the neural network. Frustratingly, this does not seem possible because none of the 1,000 types of objects in the recognition task correspond to a person-type of entity. One approach to alleviating this problem could be to use weakly-supervised object localisation (Oquab et al., 2014). The template-based language generation model used by VDR lacks the flexibility to describe interesting prepositional phrases or variety within its current template. An n-gram language generator, such as the phrase-based approaches of (Ortiz et al., 2015; Lebret et al., 2015), that works within the constraints imposed by VDR structure may generate better descriptions of images than the current template. 5 Conclusions In this paper we showed how to infer useful and reliable Visual Dependency Representations of images without expensive human supervision. Our approach was based on searching for objects in images, given a collection of co-occurring descriptions. We evaluated the utility of the representations on a downstream automatic image description task on two data sets, where the quality of the generated text largely depended on the data set. In a large data set of people performing actions, the descriptions generated by our model were comparable to a state-of-the-art multimodal deep neural network. In a smaller and more diverse data set, our approach produced poor descriptions because it was unable to extract enough useful training examples for the model. In a follow-up experiment that transferred the VDR Parsing and Language Generation model between data, we found improvements in the diverse data set. Our experiments demonstrated that explicitly encoding the spatial relationships between objects is a useful way of learning how to describe actions. There are several fruitful opportunities for future work. The most immediate improvement may be found with broader coverage object detectors. It would be useful to search for objects using multiple pre-trained visual detection models, such as a 200-class ImageNet Detection model and a 1,000-class ImageNet Recognition and Localisation model. A second strand of further work would be to relax the strict mirroring of human annotator behaviour when searching for subjects and objects in an image. It may be possible to learn good representations using only the nouns in the POS tagged description. Our current approach strictly limits the inferred VDRs to transitive verbs; images with descriptions such as “A large cow in a field” or “A man is walking” are also a focus for future relaxations of the process for creating training data. Another direction for future work would be to use a n-gram based language model constrained by the structured predicted in VDR. The current template based method is limiting the generation of objects that are being correctly realised in images. Tackling the aforementioned future work opens up opportunities to working with larger and more diverse data sets such as the Flickr8K (Hodosh et al., 2013), Flickr30K (Young et al., 2014), and MS COCO (Lin et al., 2014b) or larger action recognition data sets such as TUHOI (Le et al., 2014) or MPII Human Poses (Andriluka et al., 2014). Acknowledgements We thank the anonymous reviewers for their comments, and members of LaCo at ILLC and WILLOW at INRIA for comments on an earlier version of the work. 
We thank the Database Architectures Group and the Life Sciences Group at CWI for access to their NVIDIA Tesla K20 GPUs. D. Elliott is funded by an Alain Bensoussain Career Development Fellowship, A. P. de Vries is partially funded by COMMIT/. References Mykhaylo Andriluka, Leonid Pishchulin, Peter Gehler, and Bernt Schiele. 2014. 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. In CVPR ’14, pages 3686–3693, Columbus, OH, US. Moshe Bar and Shimon Ullman. 1996. Spatial Context in Recognition. Perception, 25(3):343–52. Irving Biederman, Robert J Mezzanotte, and Jan C Rabinowitz. 1982. Scene perception: Detecting 50 and judging objects undergoing relational violations. Cognitive Psychology, 14(2):143–177. JH Clark, Chris Dyer, Alon Lavie, and NA Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In ACL-HTL ’11, pages 176–181, Portland, OR, U.S.A. Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In SMT at EMNLP ’11, Edinburgh, Scotland, U.K. Jeff Donahue, Lisa Anne Hendricks, Sergio Guadarrama, Marcus Rohrbach, Subhashini Venugopalan, Kate Saenko, and Trevor Darrell. 2015. Longterm Recurrent Convolutional Networks for Visual Recognition and Description. In CVPR ’15, Boston, MA, U.S.A. David Eigen, Christian Puhrsch, and Rob Fergus. 2014. Depth Map Prediction from a Single Image using a Multi-Scale Deep Network. In NIPS 27, Lake Tahoe, CA, U.S.A, June. Desmond Elliott and Frank Keller. 2013. Image Description using Visual Dependency Representations. In EMNLP ’13, pages 1292–1302, Seattle, WA, U.S.A. Desmond Elliott and Frank Keller. 2014. Comparing Automatic Evaluation Measures for Image Description. In ACL ’14, pages 452–457, Baltimore, MD, U.S.A. Desmond Elliott, Victor Lavrenko, and Frank Keller. 2014. Query-by-Example Image Retrieval using Visual Dependency Representations. In COLING ’14, pages 109–120, Dublin, Ireland. Mark Everingham, Luc Van Gool, Christopher Williams, John Winn, and Andrew Zisserman. 2010. The PASCAL Visual Object Classes Challenge. IJCV, 88(2):303–338. Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Doll´ar, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, and Geoffrey Zweig. 2015. From Captions to Visual Concepts and Back. In CVPR ’15, Boston, MA, U.S.A. Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every picture tells a story: generating sentences from images. In ECCV ’10, pages 15–29, Heraklion, Crete, Greece. Yansong Feng and Mirella Lapata. 2008. Automatic Image Annotation Using Auxiliary Text Information. In ACL ’08, pages 272–280, Colombus, Ohio. Ross Girshick, Jeff Donahue, Trevor Darrell, and Jitendra Malik. 2014. Rich feature hierarchies for accurate object detection and semantic segmentation. CoRR, abs/1311.2. Micah Hodosh, Peter Young, and Julia Hockenmaier. 2013. Framing Image Description as a Ranking Task: Data, Models and Evaluation Metrics. JAIR, 47:853–899. Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross B. Girshick, Sergio Guadarrama, and Trevor Darrell. 2014. Caffe: Convolutional Architecture for Fast Feature Embedding. In MM ’14, pages 675–678, Orlando, FL, U.S.A. Andrej Karpathy and Li Fei-Fei. 2015. Deep VisualSemantic Alignments for Generating Image Descriptions. In CVPR ’15, Boston, MA, U.S.A. 
Andrej Karpathy, Armand Joulin, and Li Fei-Fei. 2014. Deep Fragment Embeddings for Bidirectional Image Sentence Mapping. In NIPS 28, Montreal, Quebec, Canada. Girish Kulkarni, Visruth Premraj, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. 2011. Baby talk: Understanding and generating simple image descriptions. In CVPR ’11, pages 1601–1608, Colorado Springs, CO, U.S.A. Polina Kuznetsova, Vicente Ordonez, Alexander C. Berg, Tamara L. Berg, and Yejin Choi. 2012. Collective Generation of Natural Image Descriptions. In ACL ’12, pages 359–368, Jeju Island, South Korea. Dieu-thu Le, Jasper Uijlings, and Raffaella Bernardi. 2014. TUHOI : Trento Universal Human Object Interaction Dataset. In WVL at COLING ’14, pages 17–24, Dublin, Ireland. Remi Lebret, Pedro O. Pinheiro, and Ronan Collobert. 2015. Phrase-based Image Captioning. In ICML ’15, Lille, France, February. Siming Li, Girish Kulkarni, Tamara L. Berg, Alexander C. Berg, and Yejin Choi. 2011. Composing simple image descriptions using web-scale n-grams. In CoNLL ’11, pages 220–228, Portland, OR, U.S.A. Chin-Yew Lin and Franz Josef Och. 2004. Automatic evaluation of machine translation quality using longest common subsequence and skip-bigram statistics. In ACL ’04, pages 605–612, Barcelona, Spain. Min Lin, Qiang Chen, and Shuicheng Yan. 2014a. Network In Network. In ICLR ’14, volume abs/1312.4, Banff, Canada. Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, and Piotr Doll´ar. 2014b. Microsoft COCO: Common Objects in Context. In ECCV ’14, pages 740– 755, Zurich, Switzerland. 51 GD Logan and DD Sadler. 1996. A computational analysis of the apprehension of spatial relations. In Paul Bloom, Mary A. Peterson, Lynn Nadel, and Merrill F. Garrett, editors, Language and Space, pages 492–592. MIT Press. Junhua Mao, Wei Xu, Yi Yang, Yiang Wang, and Alan L. Yuille. 2015. Deep captioning with multimodal recurrent neural networks (m-rnn). In ICLR ’15, volume abs/1412.6632, San Diego, CA, U.S.A. Emily E. Marsh and Marilyn Domas White. 2003. A taxonomy of relationships between images and text. Journal of Documentation, 59(6):647–672. Guido Minnen, John Carroll, and Darren Pearce. 2001. Applied morphological processing of English. Natural Language Engineering, 7(3):207–223. Margaret Mitchell, Jesse Dodge, Amit Goyal, Kota Yamaguchi, Karl Stratos, Alyssa Mensch, Alex Berg, Tamara Berg, and Hal Daum. 2012. Midge : Generating Image Descriptions From Computer Vision Detections. In EACL ’12, pages 747–756, Avignon, France. Joakim Nivre, Johan Hall, Jens Nilsson, Atanas Chanev, G¨ulsen Eryigit, Sandra K¨ubler, Svetoslav Marinov, and Erwin Marsi. 2007. MaltParser: A language-independent system for data-driven dependency parsing. Natural Language Engineering, 13(2):1. Maxime Oquab, Leon Bottou, Ivan Laptev, and Josef Sivic. 2014. Learning and Transferring Mid-level Image Representations Using Convolutional Neural Networks. In CVPR ’14, pages 1717–1724, Columbus, OH, US. Luis M. G. Ortiz, Clemens Wolff, and Mirella Lapata. 2015. Learning to Interpret and Describe Abstract Scenes. In NAACL ’15, Denver, CO, U.S.A. Erwin Panofsky. 1939. Studies in Iconology. Oxford University Press. Kishore Papineni, Salim Roukos, Todd Ward, and WJ Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL ’02, pages 311–318, Philadelphia, PA, U.S.A. DA Randell, Z Cui, and AG Cohn. 1992. A spatial logic based on regions and connection. 
In Principles of Knowledge Representation and Reasoning, pages 165–176. Cyrus Rashtchian, Peter Young, Micah Hodosh, and Julia Hockenmaier. 2010. Collecting image annotations using Amazon’s Mechanical Turk. In AMT at NAACL ’10, pages 139–147, Los Angeles, CA, U.S.A. Olga Russakovsky, Jia Deng, Hao Su, Jonathan Krause, Sanjeev Satheesh, Sean Ma, Zhiheng Huang, Andrej Karpathy, Aditya Khosla, Michael Bernstein, Alexander C Berg, and Li Fei-Fei. 2014. ImageNet Large Scale Visual Recognition Challenge. Mohammad A Sadeghi and Ali Farhadi. 2011. Recognition Using Visual Phrases. In CVPR ’11, pages 1745–1752, Colorado Springs, CO, U.S.A. Sara Shatford. 1986. Analysing the Subject of a Picture: A Theoretical Approach. Cataloging & Classification Quarterly, 6(3):39–62. Karen Simonyan and Andrew Zisserman. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In ICLR ’15, volume abs/1409.1, San Diego, CA, U.S.A. Richard Socher, Andrej Karpathy, Q Le, C Manning, and A Ng. 2014. Grounded Compositional Semantics for Finding and Describing Images with Sentences. TACL, 2:207–218. Kristina Toutanova, Dan Klein, and Christopher D Manning. 2003. Feature-Rich Part-of-Speech Tagging with a Cyclic Dependency Network. In HLTNAACL ’03, pages 173–180, Edmonton, Canada. Oriol Vinyals, Alexander Toshev, Samy Bengio, and Dumitru Erhan. 2015. Show and tell: A neural image caption generator. In CVPR ’15, Boston, MA, U.S.A. Yezhou Yang, Ching Lik Teo, Hal Daum´e III, and Yiannis Aloimonos. 2011. Corpus-Guided Sentence Generation of Natural Images. In EMNLP ’11, pages 444–454, Edinburgh, Scotland, UK. Mark Yatskar, Michel Galley, L Vanderwende, and L Zettlemoyer. 2014. See No Evil, Say No Evil: Description Generation from Densely Labeled Images. In *SEM, pages 110–120, Dublin, Ireland. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2:67– 78. 52
2015
5
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 514–523, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Entity Retrieval via Entity Factoid Hierarchy∗ Chunliang Lu, Wai Lam, Yi Liao Key Laboratory of High Confidence Software Technologies Ministry of Education (CUHK Sub-Lab) Department of Systems Engineering and Engineering Management The Chinese University of Hong Kong {cllu,wlam,yliao}@se.cuhk.edu.hk Abstract We propose that entity queries are generated via a two-step process: users first select entity facts that can distinguish target entities from the others; and then choose words to describe each selected fact. Based on this query generation paradigm, we propose a new entity representation model named as entity factoid hierarchy. An entity factoid hierarchy is a tree structure composed of factoid nodes. A factoid node describes one or more facts about the entity in different information granularities. The entity factoid hierarchy is constructed via a factor graph model, and the inference on the factor graph is achieved by a modified variant of Multiple-try Metropolis algorithm. Entity retrieval is performed by decomposing entity queries and computing the query likelihood on the entity factoid hierarchy. Using an array of benchmark datasets, we demonstrate that our proposed framework significantly improves the retrieval performance over existing models. 1 Introduction Entity retrieval, which aims at returning specific entities to directly answer a user’s query, has drawn much attention these years. Various entity retrieval tasks have been proposed, such as TREC Entity (Balog et al., 2012; Wang et al., 2011) and INEX-LD (Wang et al., 2012; Wang and Kang, 2012). Many existing entity retrieval models follow the document retrieval assumption: when issuing queries, users choose the words that may ∗The work described in this paper is substantially supported by grants from the Research Grant Council of the Hong Kong Special Administrative Region, China (Project Codes: 413510 and 14203414) and the Direct Grant of the Faculty of Engineering, CUHK (Project Code: 4055034). appear in the “entity pseudo-document”. Based on the assumption, these models construct internal entity representations by combining various entity descriptions, and use these representations to compute the rank of the candidate entities for a given entity query. These models include fielded versions of BM25 and Mixture of Language Models (Neumayer et al., 2012), Entity Language Model (Raghavan et al., 2004), Hierarchical Expert Model (Petkova and Croft, 2006), Structured Positional Entity Language Model (Lu et al., 2013). However, a closer examination of entity queries reveals that most of them are not simple uniform word samples from the “entity pseudo-document”. Instead, they can be decomposed into multiple parts, where each part describes a fact about target entities. For example, the query “National capitals situated on islands” describes two facts regarding a target entity: it is a national capital; it is located on an island. Compared to the assumption in document retrieval models, where query terms are assumed to be generated from a single document, these query terms can be regarded to be independently generated from two underlying documents. 
According to this observation, we propose that an entity query is generated via a two-step process: users first select facts that can distinguish target entities from the others; and then choose words that describe the selected facts. Based on the proposed query generation paradigm, we design a new entity retrieval framework. On one hand, an entity is modeled to have multiple internal representations, each regarding one or more closely related facts. On the other hand, an entity query is decomposed into one or more subqueries, each describing a fact about target entities. In this way, entity retrieval can be performed by combining the probabilities of subqueries being satisfied for each candidate entity. One of the central components of our proposed 514 he was born in 1961 he was born in August 1961 he was born in Honolulu Hawaii born×2, 1961×2, august×1 born×3, 1961×2, august×1, hawaii×1, honolulu×1 A B C D E born in Honolulu Hawaii born in 1961 born in August 1961 (a) born in Honolulu Hawaii born in 1961 born in August 1961 born 1961 august (b) born in Honolulu Hawaii born in 1961 born in August 1961 born 1961 august (c) Figure 1: An example of entity factoid hierarchy containing two factoids about Barack Obama retrieval framework is a novel entity representation known as entity factoid hierarchy. An entity factoid hierarchy is a tree structure composed of factoid nodes, which is automatically constructed from a collection of entity descriptions. We abuse the term “factoid” to denote a single piece of information regarding an entity. A factoid node in the hierarchy describes one or more factoids. Factoid nodes in different levels capture the information of different levels of detail (referred to as information granularities hereafter), where lower level nodes contain more detailed information and higher level nodes abstract the details away. The entity factoid hierarchy is constructed via a factor graph model, and the inference on the factor graph is achieved by a modified variant of Multiple-try Metropolis algorithm. Each factoid node is indexed separately as a pseudo-document. During retrieval, the query likelihood for a candidate entity are computed by transversing the factoid hierarchy. Compared to exiting entity retrieval models, our proposed framework exhibits two advantages: • By organizing entity descriptions in a hierarchical structure, detailed entity information is preserved and we can return finer confidence value. Suppose that the entity “Barack Obama” is only described by one sentence: “born in 1961”. Traditional entity models, which model an entity as a pseudo-document, would return high confidence value for the query “who is born in 1961”. However, as we add more and more sentences to describe “Barack Obama”, the confidence value returned for the query decreases due to the longer entity pseudo-document. This result is not desirable for entity retrieval, since adding more descriptions about other facts should not affect the confidence of existing facts. Our factoid hierarchy avoids this problem by preserving all the entity descriptions in a hierarchical structure. When performing retrieval, entity factoid hierarchy can be traversed to locate the best supporting description for the query. • By separating entity facts in different factoid nodes, our model prevent ambiguity caused by mixing terms describing different facts. 
Suppose “Barack Obama” is described by two sentences: “Barack Obama is a president of United States” and “Barack Obama is a graduate of Harvard Law School”, and our query is “Who is a president of Harvard Law School”. A traditional document retrieval model with a bag-of-word entity pseudo-document would return “Barack Obama” with high confidence, since all the query terms appear in the entity descriptions. But obviously, this result is not correct. In our factoid hierarchy, these two facts are separated in lower level factoid nodes. While higher level nodes are still mixed with terms from child nodes, they are penalized to avoid giving high confidence value. 2 Factoid Hierarchy 2.1 Hierarchy Representation As mentioned in the previous section, all the information regarding an entity is organized in a particular factoid hierarchy. We denote the term “factoid” as a single piece of information regarding an entity, such as the birth date of Barack Obama. A factoid node in the hierarchy describes one or more factoids. Each factoid node is associated with a bag-of-words vector to represent the factoid description. Factoid nodes in different depth encode information in different granularities. An example of an entity factoid hierarchy, regarding two factoids (birth date and birth place) about Barack Obama, is given in Figure 1. The 515 example hierarchy is constructed from three sentences about Barack Obama: he was born in 1961; he was born in August 1961; he was born in Honolulu Hawaii. These three sentences correspond to the leaf nodes A, B, and C respectively in Figure 1. In general, a leaf node in the factoid hierarchy comes directly from a sentence or a RDF triple describing the entity. Since it is extracted either from human written texts or from manually crafted structured databases, a leaf node represents the most exact representation regarding one or more factoids. During the construction of the hierarchy, intermediate nodes are formed as parents for nodes that contain closely related factoids. The factoid description for an intermediate node is the sum of bag-of-words vectors of its child nodes. In this way, intermediate nodes capture the words that are used more frequently with higher weights to describe the underlying factoids in a more general form. As we merge more nodes and move up in the hierarchy, intermediate nodes become blended with more different factoids. Node D in Figure 1 is an intermediate factoid node, as a parent node for nodes A and B both describing the birth date. The root node in an entity factoid hierarchy summarizes all the descriptions regarding an entity, which is similar to the “entity pseudodocument” used in some existing entity retrieval models. Each entity factoid hierarchy has only one root node. For example, node E in Figure 1 is the root node, and it contains words from all the three sentences. Note that the depth of a leaf node varies with the number of descriptions associated with the factoids. Some factoids may be associated with lots of detailed information and are expressed in many sentences, while others are only expressed in one or two sentences. For example, the factoid that Obama is elected president in 2008 may be described in many sentences and in different contexts; while the factoid that Obama is born in Kapiolani Maternity & Gynecological Hospital is only mentioned in a few sentences. In this case, factoid nodes associated with more details may have deeper hierarchical structure. 
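A factoid node can be represented with a very small data structure: a bag-of-words description plus child links, where the description of an intermediate node is the element-wise sum of its children's descriptions. The Python sketch below mirrors the example in Figure 1; class and field names are ours, and no stop-word removal is applied, so function words also appear in the counts.

```python
from collections import Counter

class FactoidNode:
    """A node in an entity factoid hierarchy: a bag-of-words description plus children."""
    def __init__(self, sentence=None, children=None):
        self.children = children or []
        if sentence is not None:                 # leaf node: a sentence or RDF triple
            self.description = Counter(sentence.lower().split())
        else:                                    # intermediate/root node: sum of children
            self.description = Counter()
            for child in self.children:
                self.description.update(child.description)

# Leaf nodes A, B, C come directly from three entity descriptions (cf. Figure 1).
a = FactoidNode("he was born in 1961")
b = FactoidNode("he was born in August 1961")
c = FactoidNode("he was born in Honolulu Hawaii")

d = FactoidNode(children=[a, b])        # intermediate node for the birth-date factoid
root = FactoidNode(children=[d, c])     # root node summarizes all descriptions

print(root.description)   # born:3, 1961:2, august:1, honolulu:1, hawaii:1, plus function words
```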
2.2 Factor Graph Model To construct the entity factoid hierarchy, we make use of a hierarchical discriminative factor graph model. A similar factor graph model has been proposed to solve the coreference resolution in (Singh et al., 2011; Wick et al., 2012). Here we design a factor graph model corresponding to the entity factoid hierarchy, together with new factor types and inference mechanism. Generally speaking, a factor graph is composed of two parts: a set of random variables and a set of factors that model the dependencies between random variables. An example of the factor graph construction corresponding to the factoid hierarchy involved in Figure 1 is given in Figure 2. In our factor graph approach, each factoid is represented as a random variable fi, corresponding to a rounded square node in Figure 2. The pairwise binary decision variable yij, denotes whether a factoid fi is a child of another factoid fj corresponding to a circle node in Figure 2. The set of factoids F plus the set of decision variables y are the random variables in our factor graph model. To model the dependency between factoids, we consider two types of factors. Ψp is the set of factors that consider the compatibility between two factoid nodes, i.e., to indicate whether two nodes have parent-child relationship. Ψu is the set of factors that measure the compatibility of the factoid node itself. Such factor is used to check whether a new intermediate node should be created. Factors are represented as square nodes in Figure 2. Given a factor graph model m, our target is to find the best assignments for the decision variable y that maximizes the objective function in Equation (1). P(y, F|m) = Y f∈F Ψp(f, fp)Ψu(f) (1) 2.3 Factors Design The pairwise factors Ψp and unit-wise factors Ψu compute the compatibility scores among factoid nodes. Each factor type is associated with a weight w to indicate the importance of the factor during inference. For the notation, the bag-of-words representation for a factoid node is denoted as d. We use superscripts p and c to denote the variables of parent nodes and child nodes. To capture the interrelations between factoid nodes, the following factors are used in our factor graph model. Bag-of-words similarity To check whether two factoid nodes refer to the same fact, we compare the similarity between their bag-of-words descriptions. We choose Kullback-Leibler divergence (KL divergence) as the similarity measure. By definition, the KL divergence of Q from P, denoted DKL(P||Q), is a measure of the informa516 born in Honolulu Hawaii born in 1961 born in August 1961 (a) born in Honolulu Hawaii born in 1961 born in August 1961 born 1961 august (b) born in Honolulu Hawaii born in 1961 born in August 1961 born 1961 august (c) Figure 2: Generation of an factoid hierarchy via factor graph inference. Factoid nodes are initialized as singletons in (a). During one step of sampling in (b), two factoid nodes are selected and one proposal is to add a common parent. If we accept the proposal, we end up with the factoid hierarchy in (c). tion lost when Q is used to approximate P. It is a non-symmetric measure and fits in our problem nicely, i.e., measuring whether a parent node is a more abstract representation of its child node. 
The compatibility score is computed as: −w1 · DKL(dp||dq) = −w1 · m X i=1 dp i × log dp i dc i  , (2) where dp i is the smoothed term frequency of the factoid description for the parent node; dc i is for the child node; w1 is a global weighting parameter among different factors. In fact, we have also explored other popular text similarity metrics summarized in (Huang, 2008), and find that KL divergence performs the best. Entropy penalty We penalize the entropy of the factoid description to encourage a smaller vocabulary of words describing the underlying factoids: −w2 · H(d) log ||d||0 , (3) where H(d) denotes the Shannon entropy for the bag-of-words representation of the factoid description d; ||d||0 is the number of unique terms in the factoid description. Structure penalty The depth of a factoid node indicates the level of information granularity. However, we also need to control the depth of the factoid hierarchy. A factoid node should not have too many levels. We define the depth penalty as: −w3 · |nd −||d||0 s |, (4) where nd is the depth of a factoid node and s is the parameter that controls the average depth of factoid nodes per term. In this way, we can control the average depth of factoid nodes in the entity factoid hierarchy. 2.4 Inference Exact inference is impossible for our factor graph model due to the large state space. Here we adopt a modified variant of Multiple-try Metropolis algorithm to conduct maximum probability estimation for inference, following the work in (Wick et al., 2013). At each sampling step, multiple changes to the current setting are proposed. The acceptance probability for a given proposal is equal to the likelihood ratio of the proposed hypothesis to the current hypothesis. In our case, we initialize the MCMC procedure to the singleton configuration, where each entity description, such as a sentence or a RDF triple, forms its own factoid hierarchy initially. At each sampling step, we randomly select two nodes and propose several alternative local modifications. If fi and fj are not connected, i.e., sharing no common child nodes, the following changes are proposed: • Add factoid fi as the parent of fj, if fj has no parent node; • Remove fj from its current parent, if fj has a parent; • Create a new common parent for fi and fj, if both fi and fj have no parent. Otherwise, if fi and fj are in the same cluster, the following changes are proposed: • Remove fj from its current parent; • Move fj’s children to fj’s parent and delete fj, if fj is an intermediate node. A sampling step of the inference process is illustrated in Figure 2. Initially, all the decision variables y are set to zero. That is, each factoid node is regarded as forming its own factoid hierarchy, as illustrated in Figure 2(a). During the inference, local modifications are proposed to the current factor graph hypothesis. For example, in Figure 2(b), 517 the two factoid nodes at the bottom are selected and proposed to add a new intermediate factoid as their common parent. If we accept the proposal, we get an intermediate factoid hierarchy as illustrated in Figure 2(c). The sampling process is iterated until no proposal has been accepted in a certain number of successive steps, or a maximum number of steps has been reached. Each entity factoid hierarchy is inferred separately, allowing us to parallelize the inference across multiple machines. 
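For concreteness, the three factor scores of Section 2.3, which the sampler just described evaluates when computing acceptance ratios, can be sketched as follows. The sketch follows Equations (2)-(4), except that we smooth and normalize the bag-of-words counts into distributions before taking the KL divergence; the weights w1, w2, w3, the smoothing constant, and the depth parameter s are placeholder values, not the tuned ones.

```python
import math
from collections import Counter

def kl_compatibility(parent, child, w1=1.0, eps=1e-6):
    """Pairwise factor: -w1 * D_KL(parent || child) over smoothed term distributions (Eq. 2)."""
    vocab = set(parent) | set(child)
    p_total = sum(parent.values()) + eps * len(vocab)
    c_total = sum(child.values()) + eps * len(vocab)
    score = 0.0
    for term in vocab:
        p = (parent[term] + eps) / p_total
        q = (child[term] + eps) / c_total
        score += p * math.log(p / q)
    return -w1 * score

def entropy_penalty(desc, w2=1.0):
    """Unit-wise factor: -w2 * H(d) / log ||d||_0 (Eq. 3)."""
    total = sum(desc.values())
    n_unique = len(desc)
    if n_unique < 2:
        return 0.0
    h = -sum((c / total) * math.log(c / total) for c in desc.values())
    return -w2 * h / math.log(n_unique)

def structure_penalty(depth, desc, w3=1.0, s=5.0):
    """Unit-wise factor: -w3 * |n_d - ||d||_0 / s| (Eq. 4)."""
    return -w3 * abs(depth - len(desc) / s)

parent = Counter({"born": 3, "1961": 2, "august": 1})
child = Counter({"born": 1, "1961": 1})
print(kl_compatibility(parent, child), entropy_penalty(parent), structure_penalty(2, parent))
```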
3 Entity Retrieval 3.1 Retrieval Model After we preprocess available information sources and construct the entity factoid hierarchy, we are ready to answer entity queries. Our retrieval model is based on the query likelihood model. Using Bayes’ rule, the probability that an entity e is a target entity for a query q can be written as: p(e|q) = p(q|e)p(e) p(q) . (5) The probability of the query p(q) is the same for all entities and can be ignored. Furthermore, we assume that the prior probability of an entity being a target entity is uniform. Thus, p(e) can also be ignored. The task is to rank an entity e in response to a query q by estimating the query generation probability p(q|e). To compute p(q|e), recall that our two-step query generation process assumes that users generate queries by first selecting facts and then choosing query words for each fact. Based on the query generation process, we first decompose the query q into m subqueries qi (discussed in Section 3.2). Then the probability p(q|e) can be computed as: p(q|e) = m Y i=1 p(qi|e) (6) = m Y i=1 n X k=1 p(qi|fk)p(fk|e) (7) ≃ m Y i=1 max k p(qi|fk). (8) Equation (6) decomposes the query into subqueries, assuming that all the subqueries are independent. Equation (7) iterates through all the factoid nodes fk in the factoid hierarchy of an entity e. Equation (8) simplifies the computation by assuming that the underlying factoid generating subquery qi is the factoid fk with the highest query generation probability. To compute p(qi|fk), the probability of the factoid fk generating the subquery qi, we use the multinomial unigram language model: p(qi|fk) = e(fk) Y j p(tj i|fk), (9) where tj i is the term j in the subquery qi. e(fk) is the penalty term for factoids containing many children: e(fk) = w · 1 c(fk), (10) where c(fk) is the number of child nodes for fk. To understand why we add this penalty term, consider a query “who is born in 2008”. Suppose “Barack Obama” is described by two sentences: “born in 1961” and “elected president in 2008”. When computing p(qi|fk) for the root node, although it contains both the terms “born” and “2008”, it should be penalized since the terms come from two different child nodes. 3.2 Query analysis As mentioned earlier, we decompose the original query q into multiple factoid subqueries qi. For long queries issued in a verbose sentence, such as “which presidents were born in 1945”, dependency parsing is performed (Klein and Manning, 2003) and the resulting dependency tree is used to split the original query. For short queries issued in keywords, such as “vietnam war movies”, we decompose it based on possible key concepts expressed in the query. Usually a short query only contains a single entity, which is used to segment the original query into subqueries. Furthermore, stop structures in verbose queries is removed, following the method proposed in (Huston and Croft, 2010). Here a stop structure is defined as a phrase which provides no information regarding the information needs, such as “tell me the”. We also inject target entity type information by replacing the leading “who ” as “person”, and “where” as “place” for all the queries. 3.3 Retrieval Process For the purpose of retrieval, each node in the entity factoid hierarchy is regarded as a pseudodocument describing one or more factoids about 518 the entity, and is indexed as a bag-of-words document during the preprocessing. The retrieval is performed in a two-step process. 
First, for each individual subquery, we retrieve top 1000 candidate entities by performing retrieval on all root nodes. This gives us an initial pool of candidate entities by merging the returned entities for subqueries. After that, for each candidate entity, we traverse its factoid hierarchy and compute the query generation probability p(q|e) using Equations (8) and (9). Top ranked entities are returned as retrieval results. 4 Experiments 4.1 Dataset We perform entity retrieval experiments using the DBpedia-Entity dataset used in (Balog and Neumayer, 2013). The dataset is a mixture of multiple entity retrieval datasets, covering entity queries of various styles such as keyword queries like “vietnam war movies” and verbose queries like “What is the capital of Canada”. Some query statistics are shown in Table 2. Query set #query avg(|q|) avg(#rel) INEX-XER 55 5.5 29.7 TREC Entity 17 6.7 12.9 SemSearch ES 130 2.7 8.6 SemSearch LS 43 5.4 12.5 QALD-2 140 7.9 41.2 INEX-LD 100 4.8 36.8 Total 485 5.3 26.7 Table 2: DBpedia-Entity dataset statistics The data corpus we use are DBpedia 3.9 and the corresponding English Wikipedia data dump on April 4, 2013. It should be noted that the original DBpedia-Entity benchmark only uses DBpedia for entity modeling (Balog and Neumayer, 2013). In our experiments, we also conducted another set of experiments which include full-text Wikipedia articles as additional entity descriptions, to evaluate the capacity of different models on handling free texts as information sources. 4.2 Comparison models and variants of our model For comparison, we have implemented the following two existing models: • BM25. BM25 is a popular document retrieval method and also used to perform entity retrieval (Balog and Neumayer, 2013). All the descriptions about an entity are aggregated into an entity pseudo-document. We use k1 = 1.2, b = 0.8 for the model parameter, similar to the original papers. • MLM-tc. The Mixture of Language Model represents an entity as a document with multiple fields, where each field is given a different weight for generating the query terms. MLM is often adopted to do entity retrieval (Neumayer et al., 2012). Here we adopt the MLM-tc model used in (Balog and Neumayer, 2013), where two fields are considered: title and content fields (described in Section 4.3). The parameters used are 0.8 for the title field and 0.2 for the content field. Note that both MLM-tc and BM25 are also compared in (Balog and Neumayer, 2013), and have shown the best MAP performances among all the compared models. For our models, the following two variants are implemented and compared. • Factoid Retrieval Model with Hierarchy (FRMwH). Our full model uses entity factoid graph as entity representation. Each factoid node is indexed as a bag-of-words document. The retrieval model described in Section 3 is employed. • Factoid Retrieval Model (FRM). This model does not use entity factoid hierarchy as entity representation. Instead, K-Means clustering algorithm is used to cluster the sentences into text clusters. Each text cluster is then indexed as a document. Compared to the FRMwH model, an entity only has a flat cluster of factoid descriptions. The same retrieval model is used. All the four models use the same query preprocessing techniques. 4.3 Setup The entity descriptions come from texts in Wikipedia articles and structured information from DBpedia. For DBpedia information, we consider top 1000 most frequent predicates as fields. 
We convert RDF predicates to free text by breaking the camelcase predicate name to terms, for example “birthPlace” is converted to “birth place”. For Wikipedia texts, we first remove all markup text such as images, categories. Infoboxes are also 519 Model INEX-XER TREC Entity SemSearch ES SemSearch LS QALD-2 INEX-LD Total MAP P@10 MAP P@10 MAP P@10 MAP P@10 MAP P@10 MAP P@10 MAP P@10 Experiments with only DBpedia information BM25 .1890 .2706 .1257 .1571 .2732 .2426 .2050 .2286 .2211 .1976 .1104 .2158 .1806 .1901 MLM-tc .1439 .2176 .1138 .1143 .2962 .2641 .1755 .1976 .1789 .1598 .1093 .2144 .1720 .1792 FRM ::: .2186 ::: .2186 .1548 :::: .1548 .2430 .2430 .2088 .2088 ::: .2462 :::: .2462 .1178 .1178 ::: .1854 ::: .1965 FRMwH ::: .2260 ::: .2260 ::: .1742 ::: .1742 .2270 .2270 .1642 .1642 ::: .2286 :::: .2286 ::: .1358 .1358 ::: .1905 ::: .2004 Experiments with both DBpedia and Wikipedia information BM25 .1313 .1887 .1374 .1667 .2916 .2526 .1867 .1833 .1552 .1253 .1698 .2680 .1848 .1821 MLM-tc .0777 .0981 .0942 .0875 .2794 .2398 .1071 .1071 .1024 .0771 .1501 .2370 .1515 .1452 FRM ::: .1922 ::: .1922 ::: .1601 :::: .1601 .2279 .2279 ::: .1729 :::: .1729 ::: .1965 ::: .1965 ::: .1793 :::: .1793 ::: .1934 ::: .1998 FRMwH ::: .2634 ::: .2634 ::: .1770 :::: .1770 .2267 .2267 ::: .1910 :::: .1910 ::: .2491 ::: .2491 .1554 .1554 ::: .2092 ::: .2130 Table 1: Retrieval performance for various models removed since the information is already well captured in DBpedia. Each Wikipedia article is then segmented to a list of sentences, which are considered as factoid descriptions regarding the entity. For the BM25 model, all the descriptions about an entity are aggregated into an entity pseudodocument. For the MLMtc model, the title field is constructed by combining DBpedia properties whose property names are ending with “title”, “name” or “label”, such as “fullName” (Neumayer et al., 2012), and the content field is the same as the entity pseudo-document used in the BM25 model. The inference algorithm for the entity factoid hierarchy is implemented based on the factorie package (McCallum et al., 2009). The parameters used in the inference are manually tuned on a small set of entities. The retrieval algorithms, including BM25 and Language Modeling, are implemented based on Apache Lucene1. For language models, Bayesian smoothing with Dirichlet priors is used, with parameter µ = 2000. For FRM, to cluster the entity descriptions, we use the K-Means clustering algorithm implemented in Carrot22. 4.4 Results We report two standard retrieval measures: mean average precision (MAP) and precision at 10 (P@10). Top 100 ranked entities are evaluated for each query. Two set of experiments are conducted: experiments with only DBpedia information; experiments with both DBpedia and Wikipedia information. The experiment result is shown in Table 1. To conduct the statistical significance analysis, we use two-tailed paired t-test at the 0.05 level. The symbols underline and ::::: wave::::::::: underline 1Apache Lucene: http://lucene.apache.org/ 2Carrot2: http://www.carrot2.org/ are used to indicate significant improvement of our model compared with the BM25 and MLMtc models respectively. The first set of rows in Table 1 show the performance of four models using only DBpedia information. Both of our models have better overall performance. On datasets with verbose queries, such as INEX-XER and TREC Entity, both our models outperform the baseline models. 
One reason is that our retrieval model relies on the assumption that verbose queries can be decomposed into multiple subqueries. The second set of rows show the performance of four models using both DBpedia and Wikipedia information. After adding the additional information from Wikipedia articles, MLM-tc attains much worse performance, while BM25 performs roughly the same. One possible reason is that Wikipedia articles contain much irrelevant information regarding entities, and these two existing models cannot easily make use of additional information. In contrast, with Wikipedia full-text available, both of our proposed models achieve obviously better performances. Our full model, FRMwH, has shown consistently better overall performance compared with the FRM model. It demonstrates that it is worthwhile to employ our proposed entity hierarchical structure for entity representation. 4.5 Analysis For the retrieval performance, we also perform a topic-level analysis between our model FRMwH and the baseline model BM25, shown in Figure 3. The X-axis represents individual query topics, ordered by average precision difference (shown on the Y-axis). Positive Y value indicates that FRMwH performs better than the BM25 model for the query. From the figure, most of 520 −0.8 −0.4 0.4 0.8 (a) −0.8 −0.4 0.4 0.8 (b) −0.8 −0.4 0.4 0.8 (c) −0.8 −0.4 0.4 0.8 (d) −0.8 −0.4 0.4 0.8 (e) −0.8 −0.4 0.4 0.8 (f) Figure 3: Topic-level differences between FRMwH and BM25. Positive values mean FRMwH is better. (a) INEX-XER; (b) TREC Entity; (c) SemSearch ES; (d) SemSearch LS; (e) QALD-2; (f) INEX-LD. queries are affected by using FRMwH model. On the datasets with verbose queries, such as INEXXER and TREC Entity, we can see most of the query are improved. FRMwH performs slightly worse for datasets like SemSearch ES which is mostly composed of keyword queries. For the queries that show little or no performance differences, manual inspection shows that both models fail to find any relevant results, due to the lack of supporting descriptions in Wikipedia and DBpedia. 5 Related Work Besides the entity retrieval models reviewed in Section 1, there are models that do not maintain an explicit entity representation. Instead, they compute the entity relevance score based on the co-occurance between entities and query terms in the documents directly. Most of these models are originally proposed for expertise retrieval, where the appearance of a person name indicates the association with the expertise mentioned in the same document. Typical models include voting model (Macdonald and Ounis, 2006), graph model (Serdyukov et al., 2008), etc. However, it is not easy to generalize these models for open domain entity retrieval. Entity models are also used in other fields besides entity retrieval. For example, entity topic models are used to perform entity prediction, classification of entity pairs, construction of entityentity network (Newman et al., 2006), as well as entity linking (Han and Sun, 2012). These models are not suitable for our retrieval framework. The decomposing of entity queries into factoid queries is related to query segmentation. Query segmentation has been used by search engines to support inverse lookup of words and phrases (Risvik et al., 2003; Bergsma and Wang, 2007). Our use of query decomposition is quite different compared to query segmentation. 
Besides query segmentation, query decomposition has also been used to facilitate the acquisition and optimization of high-order contextual term associations (Song et al., 2012). Our work is also related to the information extraction and knowledge representation field since our framework involves extraction and aggregation of knowledge from free texts. However, most existing approaches takes two extreme ways: either extract relations based on pre-defined ontology, such as DBpedia (Lehmann et al., 2014); or cluster relation without referring to some ontology, such as OpenIE (Etzioni et al., 2011). Though our main goal is not on constructing a complete knowledge base, we do leverage both existing knowledge bases as well as free text data. Semantic search also targets on returning answers directly (Pound et al., 2010; Blanco et al., 2011; Tonon et al., 2012; Kahng and Lee, 2012). However, they are mainly based on structured linked data, as well as structured query language like SPARQL. While this is an effective approach if we have a powerful thorough knowledge base, in practice many facts cannot be effectively represented as linked data. Only a small set of relations (thousands in DBpedia) have been defined in the ontology, such as “birthPlace”. Furthermore, even if we can define a formal representation of human knowledge, retrieve them effectively is still a problem due to the difficulty of transforming the human query into a structured query on a knowledge base. 6 Conclusions We propose that an entity query is generated in a two-step process: users first select the facts that can distinguish target entities from the others; then choose words to express those facts. Following this motivation, we propose a retrieval framework by decomposing the original query into factoid queries. We also propose to construct an entity 521 factoid hierarchy as the entity model for the purpose of entity retrieval. Our entity factoid hierarchy can integrate information of different granularities from both free text and structured data. Extensive experiments demonstrate the effectiveness of our framework. References Krisztian Balog and Robert Neumayer. 2013. A test collection for entity search in DBpedia. In Proceedings of the 36th International ACM SIGIR conference on Research and Development in Information Retrieval, pages 737–740. K. Balog, P. Serdyukov, and A. P. de Vries. 2012. Overview of the TREC 2011 entity track. In Proceedings of the Twentieth Text REtrieval Conference. Shane Bergsma and Qin Iris Wang. 2007. Learning noun phrase query segmentation. In Proc. EMNLPCoNLL, pages 819–826. Roi Blanco, Harry Halpin, Daniel M. Herzig, Peter Mika, Jeffrey Pound, and Henry S. Thompson. 2011. Entity search evaluation over structured web data. In Proceedings of the 1st International Workshop on Entity-Oriented Search, EOS ’11. Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam Mausam. 2011. Open information extraction: The second generation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 3–10. Xianpei Han and Le Sun. 2012. An entity-topic model for entity linking. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 105– 115. Anna Huang. 2008. Similarity measures for text document clustering. In Proceedings of the Sixth New Zealand Computer Science Research Student Conference, pages 49–56. Samuel Huston and W. Bruce Croft. 
2010. Evaluating verbose query processing techniques. In Proceedings of the 33rd International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 291–298. Minsuk Kahng and Sang-goo Lee. 2012. Exploiting paths for entity search in rdf graphs. In Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’12, pages 1027–1028. Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting on Association for Computational Linguistics - Volume 1, ACL ’03, pages 423– 430. Jens Lehmann, Robert Isele, Max Jakob, Anja Jentzsch, Dimitris Kontokostas, Pablo N. Mendes, Sebastian Hellmann, Mohamed Morsey, Patrick van Kleef, S¨oren Auer, and Christian Bizer. 2014. DBpedia - a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web Journal, 6(2):167–195. Chunliang Lu, Lidong Bing, and Wai Lam. 2013. Structured positional entity language model for enterprise entity retrieval. In Proceedings of the 22Nd ACM International Conference on Conference on Information & Knowledge Management, CIKM ’13, pages 129–138. Craig Macdonald and Iadh Ounis. 2006. Voting for candidates: adapting data fusion techniques for an expert search task. In Proceedings of the 15th ACM International Conference on Information and Knowledge Management, pages 387–396. Andrew McCallum, Karl Schultz, and Sameer Singh. 2009. FACTORIE: Probabilistic programming via imperatively defined factor graphs. In Neural Information Processing Systems (NIPS), pages 1249– 1257. Robert Neumayer, Krisztian Balog, and Kjetil Nrvg. 2012. When simple is (more than) good enough: Effective semantic search with (almost) no semantics. In Advances in Information Retrieval, pages 540–543. David Newman, Chaitanya Chemudugunta, and Padhraic Smyth. 2006. Statistical entity-topic models. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 680–686. D. Petkova and W.B. Croft. 2006. Hierarchical language models for expert finding in enterprise corpora. In Tools with Artificial Intelligence, 2006. ICTAI ’06. 18th IEEE International Conference on, pages 599–608, Nov. Jeffrey Pound, Peter Mika, and Hugo Zaragoza. 2010. Ad-hoc object retrieval in the web of data. In Proceedings of the 19th international conference on World wide web, WWW ’10, pages 771–780. Hema Raghavan, James Allan, and Andrew Mccallum. 2004. An exploration of entity models, collective classification and relation description. In Proceedings of KDD Workshop on Link Analysis and Group Detection, pages 1–10. K. M. Risvik, T. Mikolajewski, and P. Boros. 2003. Query segmentation for web search. In Proceedings of the Twelfth International World Wide Web Conference (Poster session). 522 Pavel Serdyukov, Henning Rode, and Djoerd Hiemstra. 2008. Modeling multi-step relevance propagation for expert finding. In Proceeding of the 17th ACM Conference on Information and Knowledge Mining, pages 1133–1142. Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2011. Large-scale cross-document coreference using distributed inference and hierarchical models. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 793–803. Dawei Song, Qiang Huang, Peter Bruza, and Raymond Lau. 2012. An aspect query language model based on query decomposition and high-order contextual term associations. Comput. 
Intell., 28(1):1–23, February. Alberto Tonon, Gianluca Demartini, and Philippe Cudré-Mauroux. 2012. Combining inverted indices and structured search for ad-hoc object retrieval. In Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval, SIGIR ’12, pages 125–134. Qiuyue Wang and Jinglin Kang. 2012. Integrated retrieval over structured and unstructured data. In Pamela Forner, Jussi Karlgren, and Christa Womser-Hacker, editors, CLEF (Online Working Notes/Labs/Workshop), pages 42–44. Zhanyi Wang, Wenlong Lv, Heng Li, Wenyuan Zhou, Li Zhang, Xiao Mo, Liaoming Zhou, Weiran Xu, Guang Chen, and Jun Guo. 2011. PRIS at TREC 2011 entity track: Related entity finding and entity list completion. In TREC. Qiuyue Wang, Jaap Kamps, Georgina Ramírez Camps, Maarten Marx, Anne Schuth, Martin Theobald, Sairam Gurajada, and Arunav Mishra. 2012. Overview of the INEX 2012 linked data track. In CLEF (Online Working Notes/Labs/Workshop). Michael Wick, Sameer Singh, and Andrew McCallum. 2012. A discriminative hierarchical model for fast coreference at large scale. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 379–388. Michael Wick, Sameer Singh, Harshal Pandya, and Andrew McCallum. 2013. A joint model for discovering and linking entities. In CIKM 2013 Workshop on Automated Knowledge Base Construction, pages 67–72.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 524–533, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Encoding Distributional Semantics into Triple-Based Knowledge Ranking for Document Enrichment Muyu Zhang1∗, Bing Qin1, Mao Zheng1, Graeme Hirst2, and Ting Liu1 1Research Center for Social Computing and Information Retrieval Harbin Institute of Technology, Harbin, China 2Department of Computer Science, University of Toronto, Toronto, ON, Canada {myzhang,qinb,mzheng,tliu}@ir.hit.edu.cn [email protected] Abstract Document enrichment focuses on retrieving relevant knowledge from external resources, which is essential because text is generally replete with gaps. Since conventional work primarily relies on special resources, we instead use triples of Subject, Predicate, Object as knowledge and incorporate distributional semantics to rank them. Our model first extracts these triples automatically from raw text and converts them into real-valued vectors based on the word semantics captured by Latent Dirichlet Allocation. We then represent these triples, together with the source document that is to be enriched, as a graph of triples, and adopt a global iterative algorithm to propagate relevance weight from source document to these triples so as to select the most relevant ones. Evaluated as a ranking problem, our model significantly outperforms multiple strong baselines. Moreover, we conduct a task-based evaluation by incorporating these triples as additional features into document classification and enhances the performance by 3.02%. 1 Introduction Document enrichment is the task of acquiring relevant background knowledge from external resources for a given document. This task is essential because, during the writing of text, some basic but well-known information is usually omitted by the author to make the document more concise. For example, Baghdad is the capital of Iraq is omitted in Figure 1a. A human will fill these gaps automatically with the background knowledge in his mind. However, the machine lacks both the ∗This work was partly done while the first author was visiting University of Toronto. necessary background knowledge and the ability to select. The task of document enrichment is proposed to tackle this problem, and has been proved helpful in many NLP tasks such as web search (Pantel and Fuxman, 2011), coreference resolution (Bryl et al., 2010), document cluster (Hu et al., 2009) and entity disambiguation (Sen, 2012). We can classify previous work into two classes according to the resources they rely on. The first line of work uses Wikipedia, the largest on-line encyclopedia, as a resource and introduces the content of Wikipedia pages as external knowledge (Cucerzan, 2007; Kataria et al., 2011; He et al., 2013). Most research in this area relies on the text similarity (Zheng et al., 2010; Hoffart et al., 2011) and structure information (Kulkarni et al., 2009; Sen, 2012; He et al., 2013) between the mention and the Wikipedia page. Despite the apparent success of these methods, most Wikipedia pages contain too much information, most of which is not relevant enough to the source document, and this causes a noise problem. 
Another line of work tries to improve the accuracy by introducing ontologies (Fodeh et al., 2011; Kumar and Salim, 2012) and structured knowledge bases such as WordNet (Nastase et al., 2010), which provide semantic information about words such as synonym (Sun et al., 2011) and antonym (Sansonnet and Bouchet, 2010). However, these methods primarily rely on special resources constructed with supervision or even manually, which are difficult to expand and in turn limit their applications in practice. In contrast, we wish to seek the benefits of both coverage and accuracy from a better representation of background knowledge: triples of Subject, Predicate, Object (SPO). According to Hoffart et al. (2013), these triples, such as LeonardCohen, wasBornIn, Montreal, can be extracted automatically from Wikipedia and other sources, which is compatible with the RDF data model (Staab and Studer, 2009). Moreover, by extracting these 524 Global Ranking Global Ranking S1: The coalition may never know if Iraqi president Saddam Hussein survived a U.S. air strike yesterday. S2: A B-1 bomber dropped four 2,000-pound bombs on a building in a residential area of Baghdad . S3: They had received intelligence reports that senior officials were meeting there, possibly including Saddam Hussein and his sons . Iraq Baghdad Saddam Hussein Capital hasChild Qusay Hussein k1: k2: (a) Source document: air strike aiming at Saddam in Baghdad Global Ranking Global Ranking S1: The coalition may never know if Iraqi president Saddam Hussein survived a U.S. air strike yesterday. S2: A B-1 bomber dropped four 2,000-pound bombs on a building in a residential area of Baghdad . S3: They had received intelligence reports that senior officials were meeting there, possibly including Saddam Hussein and his sons . Iraq Baghdad Saddam Hussein Capital hasChild Qusay Hussein k1: k2: (b) Two omitted relevant pieces of background knowledge Figure 1: An example of document enrichment: A source document about a U.S. air strike omitting two important pieces of background knowledge which are acquired by our framework. triples from multiple sources, we also get better coverage. Therefore, one can expect that this representation is helpful for better document enrichment by incorporating both accuracy and coverage. In fact, there is already evidence that this representation is helpful. Zhang et al. (2014) proposed a triple-based document enrichment framework which uses triples of SPO as background knowledge. They first proposed a search engine– based method to evaluate the relatedness between every pair of triples, and then an iterative propagation algorithm was introduced to select the most relevant triples to a given source document (see Section 2), which achieved a good performance. However, to evaluate the semantic relatedness between two triples, Zhang et al. (2014) primarily relied on the text of triples and used search engines, which makes their method difficult to re-implement and in turn limits its application in practice. Moreover, they did not carry out any task-based evaluation, which makes it uncertain whether their method will be helpful in real applications. Therefore, we instead use topic models, especially Latent Dirichlet Allocation (LDA), to encode distributional semantics of words and convert every triple into a real-valued vector, which is then used to evaluate the relatedness between a pair of triples. We then incorporate these triples into the given source document and represent them together as a graph of triples. 
Then a modified iterative propagation is carried out over the entire graph to select the most relevant triples of background knowledge to the given source document. To evaluate our model, we conduct two series of experiments: (1) evaluation as a ranking problem, and (2) task-based evaluation. We first treat this task as a ranking problem which inputs one document and outputs the top N most-relevant triples of background knowledge. Second, we carry out a task-based evaluation by incorporating these relevant triples acquired by our model into the original model of document classification as additional features. We then perform a direct comparison between the classification models with and without these triples, to determine whether they are helpful or not. On the first series of experiments, we achieve a MAP of 0.6494 and a P@N of 0.5597 in the best situation, which outperforms the strongest baseline by 5.87% and 17.21%. In the task-based evaluation, the enriched model derived from the triples of background knowledge performs better by 3.02%, which demonstrates the effectiveness of our framework in real NLP applications. 2 Background The most closely related work in this area is our own (Zhang et al., 2014), which used the triples of SPO as background knowledge. In that work, we first proposed a triple graph to represent the source document and then used a search engine– based iterative algorithm to rank all the triples. We describe this work in detail below. Triple graph Zhang et al. (2014) proposed the triple graph as a document representation, where the triples of SPO serve as nodes, and the edges between nodes indicate their semantic relatedness. There are two kinds of nodes in the triple graph: (1) source document nodes (sd-nodes), which are triples extracted from source documents, and (2) background knowledge nodes (bk-nodes), which are triples extracted from external sources. Both of them are extracted automatically with Reverb, a well-known Open Information Extraction system (Etzioni et al., 2011). There are also two kinds of edges: (1) an edge between a pair of sd-nodes, and (2) an edge between one sd-node and another bk-node, both of which are unidirectional. In the original representation, there are no edges between two bk-nodes because they treat the bk-nodes as recipients of relevance weight only. In this paper, we modify this setup and connect every pair of bknodes with an edge, so the bk-nodes serve as intermediate nodes during the iterative propagation process and contribute to the final performance too as shown in our experiments (see Section 5.1). 525 Relevance evaluation To compute the weight of a edge, Zhang et al. (2014) evaluate the semantic relatedness between two nodes with a search engine–based method. They first convert every node, which is a triple of SPO, into a query by combining the text of Subject and Object together. Then for every pair of nodes ti and tj, they construct three queries: p, q, and p ∩q, which correspond to the queries of ti, tj, and tj ∩tj, the combination of ti and t j. All these queries are put into a search engine to get H(p), H(q), and H(p ∩q), the numbers of returned pages for query p, p, and p∩q. Then the WebJaccard Coefficient (Bollegala et al., 2007) is used to evaluate r(i, j), the relatedness between ti and tj, according to Formula 1. r(i, j) = WebJaccard(p,q) =    0 if H(p∩q) ≤C H(p∩q) H(p)+H(q)−H(p∩q) otherwise. (1) Using r(i, j), Zhang et al. 
(2014) further define p(i, j), the probability of ti and tj propagating to each other, as shown in Formula 2. Here N is the set of all nodes, and δ(i, j) denotes whether an edge exists between two nodes or not. p(i, j) = r(i, j)×δ(i, j) ∑n∈N r(n, j)×δ(n, j) (2) Iterative propagation Considering that the source document D is represented as a graph of sd-nodes, so the relevance of background knowledge tb to D is naturally converted into that of tb to the graph of sd-nodes. Zhang et al. (2014) evaluate this relevance by propagating relevance weight from sd-nodes to tb iteratively. After convergence, the relevance weight of tb will be treated as the final relevance to D. There are in total n × n pairs of nodes, and their p(i, j) are stored in a matrix P. Zhang et al. (2014) use ⃗W = (w1,w2,...,wn) to denote the relevance weights of nodes, where wi indicates the relevance of ti to D. At the beginning, each wi of bk-nodes is initialized to 0, and each that of sd-nodes is initialized to its importance to D. Then ⃗W is updated to ⃗W ′ after every iteration according to Formula 3. They keep updating the weights of both sd-nodes and bk-nodes until convergence and do not distinguish them explicitly. ⃗W ′ = ⃗W ×P = ⃗W ×   p(1,1) p(1,2) ... p(1,n) p(2,1) p(2,2) ... p(2,n) ... ... ... ... p(n,1) p(n,2) ... p(n,n)   (3) 3 Methodology The key idea behind this work is that every document is composed of several units of information, which can be extracted into triples automatically. For every unit of background knowledge b, the more units that are relevant to b and the more relevant they are, the more relevant b will be to the source document. Based on this intuition, we first present both source document information and background knowledge together as a document-level triple graph as illustrated in Section 2. Then we use LDA to capture the distributional semantics of a triple by representing it as a vector of distributional probabilities over k topics and evaluate the relatedness between two triples with cosine-similarity. Finally, we propose a modified iterative process to propagate the relevance score from the source document information to the background knowledge and select the top n relevant ones. 3.1 Encoding distributional semantics LDA LDA is a popular generative probabilistic model, which was first introduced by Blei et al. (2003). LDA views every document as a mixture over underlying topics, and each topic as a distribution over words. Both the document-topic and the topic-word distributions are assumed to have a Dirichlet prior. Given a set of documents and a number of topics, the model returns θd, the topic distribution for each document d, and φz, the word distribution for every topic z. LDA assumes the following generative process for each document in a corpus D: 1. Choose N ∼Poisson(ξ). 2. Choose θ ∼Dir(α). (a) Choose a topic zn ∼Multinomial(θ). (b) Choose a word wn from p(wn|zn,β) conditioned on the topic zn. Here the dimensionality k of the Dirichlet distribution (and thus the dimensionality of the topic vari526 Figure 2: Graphical representation of LDA. The boxes represents replicates, where the inner box represents the repeated choice of N topics and words within a document, while the outer one represents the repeated generation of M documents. 
able z) is assumed to be known and fixed; θ is a kdimensional Dirichlet random variable, where the parameter α is a k-vector with components αi > 0; and the β indicates the word probabilities over topics, which is a matrix with βij = p(w j = 1|zi = 1). Figure 2 shows the representation of LDA as a probabilistic graphical model with three levels. There are two corpus-level parameters α and β , which are assumed to be sampled once in the process of generating a corpus; one document-level variable θd, which is sampled once per document; and two word-level variables zdn and wdn, which are sampled once for each word in each document. We employ the publicly available implementation of LDA, JGibbLDA21 (Phan et al., 2008), which has two main execution methods: parameter estimation (model building) and inference for new data (classification of a new document). Relevance evaluation Given a set of documents and the number of topics k, LDA will return φz, the word distribution over the topic z. So for every word wn, we get k distributional probabilities over k topics. We use pwnzi to denote the probability that wn appears in the ith topic zi, where i ≤k, zi ∈ Z, the set of k topics. Then we combine these k possibilities together as a real-valued vector⃗vwn to represent wn as shown in Formula 4. ⃗vwn = (pwnz1, pwnz2,..., pwnzk) (4) After getting the vectors of words, we employ an intuitive method to compute the vector of a triple t, by accumulating all the corresponding vectors of words appearing in t according to Formula 5. Considering that the elements of this newly generated vector indicate the distributional probabilities of t over k topics, we then normalize 1http://jgibblda.sourceforge.net/ it according to Formula 6 so that its elements sum to 1. This gives us ⃗vt, the real-valued vector of triple t, which captures its distributional probabilities over k topics. Here t corresponds to a triple of background knowledge or of source document, ptzi indicates the possibility of t to appear in the ith topic zi, and wn ∈t means that wn appears in t. ptzi = ∑ wn∈t pwnzi (5) ⃗vt = (ptz1, ptz2,..., ptzk) ∑k i=1 ptzi (6) Using the vectors of triples, we can easily compute the semantic relatedness between a pair of triples as their cosine-similarity according to Formula 7. Here A, B correspond to the real-valued vectors of two triples, r(A,B) denotes their semantic relatedness, and k is the number of topics, which is also the length of A (or B). A high value of r(A,B) usually indicates a close relatedness between A and B, and thus a higher probability of propagating to each other in the following modified iterative propagation illustrated in Section 3.2. r(A,B) =cos(A,B) = AB ∥A∥∥B∥ = ∑k i=1 AiBi q ∑k i=1 (Ai)2 q ∑k i=1 (Bi)2 (7) 3.2 Modified iterative propagation In this part, we propose a modified iterative propagation based ranking model to select the mostrelevant triples of background knowledge. There are three primary modifications to the original model of Zhang et al. (2014), all of which are shown more powerful in our experiments. First of all, the original model (Zhang et al., 2014) does not reset the relevance weight of sdnodes after every iteration. This results in a continued decrease of the relevance weight of sd-nodes, which weakens the effect of sd-nodes during the iterative propagation and in turn affects the final performance. 
To tackle this problem, we decrease the relevance weight of bk-nodes and increase that of sd-nodes according to a fixed ratio after every iteration, so as to ensure that the total weight of sd-nodes is always higher than that of bk-nodes. Note that although the relevance weights of bk-nodes are changed after the redistribution, the corresponding ranking of them is not changed because the redistribution is carried out 527 John Lennon Yoko Ono ?? Beatles sd-node bk-node bk-node Figure 3: The edge between two bk-nodes helps in the better evaluation of relatedness between the bk-node Yoko Ono and the sd-node Beatles. over all nodes accordingly. In our experiments, we tried different ratios and finally chose 10:1, with sd-nodes corresponding to 10 and bk-nodes to 1, which achieved the best performance. In addition, we also modify the triple graph, the representation of a document illustrated in Section 2, by connecting every pair of bk-nodes with an edge, which is not allowed in the original model. This modification was motivated by the intuition that the relatedness between bk-nodes also contributes to the better evaluation of relevance to the source document, because the bk-nodes can serve as the intermediate nodes during the iterative propagation over the entire graph. Figure 3 shows an example, where the bk-node John Lennon is close to both the sd-node Beatles and to another bknode Yoko Ono, so the relatedness between two bk-nodes John Lennon and Yoko Ono helps in better evaluation of the relatedness between the bknode Yoko Ono and the sd-node Beatles. We also modify the definition of p(i, j), the probability of two nodes ti and t j propagating to each other. Zhang et al. (2014) compute this probability according to Formula 2, which highlights the number of neighbors, but weakens the relatedness between nodes, due to the normalization. For instance, if a node tx has only one neighbor ty, no matter how low their relatedness is, their p(x,y) will still be equal to 1 in the original model, while another node with two equally but closely related neighbors will only get a probability of 0.5 for each neighbor. We modify this setup by removing the normalization process and computing p(i, j) as the relatedness between ti and tj directly, which is evaluated according to Formula 1 . 4 Encoding background knowledge into document classification In this part, we demonstrate that the introduction of relevant knowledge could be helpful to real NLP applications. In particular, we choose the document classification task as a demonstration, which aims to classify documents into predefined categories automatically (Sebastiani, 2002). We choose this task for two reasons: (1) This task has witnessed a booming interest in the last 20 years, due to the increased availability of documents in digital form and the ensuing need to organize them, so it is important in both research and application. (2) The state-of-the-art performance of this task is achieved by a series of topic model– based methods, which rely on the same model as we do, but make use of source document information only. However, there is always some omitted information and relevant knowledge, which cannot be captured from the source document. Intuitively, the recovery of this information will be helpful. If we can improve the performance by introducing extra background knowledge into existing framework of document classification, we can inference naturally that the improvement benefits from the introduction of this knowledge. 
Traditional methods primarily use topic models to represent a document as a topic vector. Then a SVM classifier takes this vector as input and outputs the class of the document. In this work, we propose a new framework for document classification to incorporate extra knowledge. Given a document to be classified, we select the top N mostrelevant triples of background knowledge with our model introduced in Section 3, all of which are represented as vectors of ⃗vt = (ptz1, ptz2,..., ptzk). Then we combine these N triples as a new vector⃗v ′ t, which is then incorporated into the original framework of document classification. Another SVM classifier takes ⃗v ′ t, together with the original features extracted from the source document, as input and outputs the category of the source document. To combine N triples as one, we employ an intuitive method by computing the average of N corresponding vectors in every dimension. One possible problem is how to decide N, the number of triples to be introduced. We first introduce a fixed amount of triples for every document. Moreover, we also select the triples according to their relevance weight to the source document (see Section 3.2) by setting a threshold of relevance weight first and selecting the triples whose weights are higher than the threshold. We further discuss the impact of different thresholds in Section 5.2. 528 5 Experiments To evaluate our model, we conduct two series of experiments: (1) We first treat this task as a ranking problem, which takes a document as input and outputs the ranked triples of background knowledge, and evaluate the ranking performance by computing the scores of MAP and P@N. (2) We also conduct a task-based evaluation, where document classification (see Section 4) is chosen as a demonstration, by enriching the background knowledge to the original framework as additional features and performing a direct comparison. 5.1 Evaluation as a ranking problem Data preparation The data is composed of two parts: source documents and background knowledge. For source documents, we use a publicly available Chinese corpus which consists of 17,199 documents and 13,719,428 tokens extracted from Internet news2 including 9 topics: Finance, IT, Health, Sports, Travel, Education, Jobs, Art, Military. We then randomly but equally select 600 articles as the set of source documents from 9 topics without data bias. We use all the other 16,599 documents of the same corpus as the source of background knowledge, and then introduce a wellknown Chinese open source tool (Che et al., 2010) to extract the triples of background knowledge from the raw text automatically. So the background knowledge also distributes evenly across the same 9 topics. We use the same tool to extract the triples of source documents too. Baseline systems As Zhang et al. (2014) argued, it is difficult to use the methods in traditional ranking tasks, such as information retrieval (Manning et al., 2008) and entity linking (Han et al., 2011; Sen, 2012), as baselines in this task, because our model takes triples as basic input and thus lacks some crucial information such as link structure. For better comparison, we implement three methods as baselines, which have been proved effective in relevance evaluation: (1) Vector Space Model (VSM), (2) Word Embedding (WE), and (3) Latent Dirichlet Allocation (LDA). 
Note that our model captures the distributional semantics of triples with LDA, while WE serves as a baseline only; the word embeddings are acquired over the same corpus mentioned previously with the publicly available tool word2vec (https://code.google.com/p/word2vec/). Here we use ti, D, and wi to denote a triple of background knowledge, a source document, and the relevance of ti to D, respectively. For VSM, we represent both ti and D with a tf-idf scheme (Salton and McGill, 1986) and compute wi as their cosine similarity. For WE, we first convert both ti and the triples extracted from D into real-valued vectors with WE and then compute wi by accumulating all the cosine similarities between ti and every triple from D. For LDA, we represent ti as a vector with our model introduced in Section 3.1 and get the vector of D directly with LDA; we then evaluate the relevance of ti to D by computing the cosine similarity of the two corresponding vectors.
Moreover, to determine whether our modified iterative propagation is helpful or not, we also compare our full model (Ours) against a simplified version without iterative propagation (Ours-S). In Ours-S, we represent both ti and the triples extracted from D as real-valued vectors with our model introduced in Section 3.1, and then compute wi by accumulating all the cosine similarities between ti and the triples extracted from D. For all the baselines, we rank the triples of background knowledge according to wi, their relevance to D.

Experimental setup. Previous research relies on manual annotation to evaluate the ranking performance (Zhang et al., 2014), which is costly and makes it difficult to achieve high consistency. In this paper, we carry out an automatic evaluation. The corpus we used consists of 9 different classes, from which we extract triples of background knowledge, so correspondingly there are 9 sets of triples. We then randomly select 200 triples from every class and mix the 200 × 9 = 1800 triples together as S, the set of triples of background knowledge. For every document D to be enriched, our model selects the top N most relevant triples from S and returns them to D as enrichments. We treat a triple ti selected by our model as positive only if ti is extracted from the same class as D. We evaluate the performance of our model with two well-known criteria for ranking problems: MAP and P@N (Voorhees et al., 2005). Statistically significant differences in performance are determined using the two-tailed paired t-test computed at a 95% confidence level, based on the average performance per source document.

Model    MAP 5    P@5      MAP 10   P@10
VSM      0.4968   0.3435   0.4752   0.3841
WE       0.4356   0.3354   0.4624   0.3841
LDA      0.6134   0.4775   0.6071   0.5295
Ours-S   0.5325   0.3762   0.5012   0.4054
Ours     0.6494   0.5597   0.6338   0.5502
Table 1: The performance evaluated as a ranking task. Here Ours corresponds to our full model, while Ours-S is a simplified version of our model without iterative propagation (see Section 3.2).

Results. The performance of the different models is shown in Table 1. Overall, our full model Ours outperforms all the baseline systems significantly in every metric. When evaluating the top 10 triples with the highest relevance weight, our framework outperforms the best baseline, LDA, by 4.4% in MAP and by 3.91% in P@N. When evaluating the top 5 triples, our framework performs even better and significantly outperforms the best baseline by 5.87% in MAP and by 17.21% in P@N.
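The two criteria used in Table 1 can be computed as follows. This is a generic sketch, not the authors' evaluation script; each ranked triple is assumed to carry the class label of the document it was extracted from, and a triple counts as positive when that label matches the class of the source document D.

    def precision_at_n(ranked_classes, doc_class, n):
        # P@N: fraction of the top-n ranked triples whose source class matches D's class.
        top = ranked_classes[:n]
        return sum(1 for c in top if c == doc_class) / float(n)

    def average_precision_at_n(ranked_classes, doc_class, n):
        # AP over the top-n positions; MAP_n is the mean of this value over all
        # source documents.
        hits, total = 0, 0.0
        for i, c in enumerate(ranked_classes[:n], start=1):
            if c == doc_class:
                hits += 1
                total += hits / float(i)
        return total / hits if hits else 0.0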
To analyze the results further, Ours-S, the simplified version of our model without iterative propagation, outperforms two strong baselines, VSM and WE, which indicates the effectiveness of encoding distributional semantics. However, the performance of this simplified model is not as good as that of LDA, because Ours-S evaluates relevance by simple accumulation, which fails to capture the relatedness between multiple triples from the source document. We tackle this problem by incorporating the modified iterative propagation over the entire triple graph into Ours, which achieves the best performance. One remaining question is why WE performs poorly; the reason lies in the setup of our evaluation, where we label positive and negative instances according to the class information of triples and documents, which is a better fit for topic model–based methods.

Discussion. We further analyze the impact of the three modifications we made to the original model (see Section 3.2). We first focus on the impact of decreasing the relevance weight of bk-nodes and increasing that of sd-nodes after every iteration. As mentioned previously, we change their relevance weights according to a fixed ratio, which is important to the performance. Figure 4 shows the performance of models with different ratios. With any increase of the ratio, our model improves its performance in every metric, which shows the effectiveness of this setup. The performance remains stable from the value of 10:1, which is thus chosen as the final value in our experiments.

Figure 4: The performance of our model with different ratios between sd-nodes and bk-nodes (MAP 5, P@5, MAP 10, P@10 plotted against the sd-node/bk-node ratio from 1 to 10).

We then turn to the other two modifications, the edges between bk-nodes and the setup of the propagation probability. Table 2 shows the performance of our full model and the simplified models without these two modifications. With the edges between bk-nodes, our model improves the performance by 1.48% in MAP 5 and by 1.82% in P@5. With the modified iterative propagation, we achieve an even greater improvement of 13.99% in MAP 5 and 24.27% in P@5. All these improvements are statistically significant, which indicates the effectiveness of these modifications to the original model.

Model     MAP 5    P@5      MAP 10   P@10
Full      0.6494   0.5597   0.6338   0.5502
Full−bb   0.6399   0.5497   0.6254   0.5404
Full−p    0.5697   0.4504   0.5485   0.4409
Table 2: The performance of our full model (Full) and two simplified models without modifications: (1) without edges between bk-nodes (Full−bb), (2) without the newly proposed definition of propagation probability between nodes (Full−p).

5.2 Task-based evaluation

Data preparation. To carry out the task-based evaluation, we use the same Chinese corpus as in the previous experiments, which consists of 17,199 documents extracted from Internet news in 9 topics. We also use the same tool (Che et al., 2010) to extract triples of both the source documents and the background knowledge. For every document D to be classified, we first use our model to get the top N most relevant triples to D, and then use them as extra features for the original model.
We conduct a direct comparison between the models with and without background knowledge, to evaluate the impact of introducing background knowledge.

Baseline systems. We first describe two baselines without background knowledge, based on VSM and LDA. For VSM, the test document D is represented as a bag of words, where the word distribution over candidate topics is trained on the same corpus mentioned previously. We then evaluate the similarity between D and a candidate topic with cosine similarity directly, and the topic with the highest similarity is chosen as the final class. We use two setups: (1) VSM-one-hot represents a word as 1 if it appears in a document or topic, and 0 otherwise; (2) VSM-tf-idf represents a word by its tf-idf value. For LDA, we re-implement the state-of-the-art system as another baseline, which represents D as a topic vector ⃗vd in the parameter estimation step, and then introduces an SVM classifier that takes ⃗vd as input and decides the final class in the inference step.
We also evaluate the impact of knowledge quality by applying two different models to introduce background knowledge: our full model introduced in Section 3 (Ours), and a simplified version of our model without iterative propagation (Ours-S). They perform differently at introducing background knowledge, as shown in the previous experiments (see Section 5.1). We then conduct a direct comparison between the document classification models under these conditions, whose differing performances demonstrate the impact of different qualities of background knowledge on this task.

Model            P        R        F
VSM+one-hot      0.8214   0.8146   0.8168
VSM+tf-idf       0.8381   0.8333   0.8336
LDA+SVM          0.8512   0.8422   0.8436
LDA+SVM+Ours-S   0.8584   0.8489   0.8501
LDA+SVM+Ours     0.8748   0.8689   0.8691
Table 3: The performance of document classification with (LDA+SVM+Ours-S, LDA+SVM+Ours) and without (others) background knowledge.

Results. Table 3 shows the results. We use P, R, and F to evaluate the performance, computed as the micro-average over the 9 topics. Both models with background knowledge (LDA+SVM+Ours-S, LDA+SVM+Ours) outperform the systems without knowledge, which shows that the introduction of background knowledge helps in the better classification of documents. The system with the simplified version of our model without iterative propagation (LDA+SVM+Ours-S) achieves an F-value of 0.8501, which also outperforms the baselines without knowledge. Moreover, the system with our full model (LDA+SVM+Ours) achieves the best performance, an F-value of 0.8691, and outperforms the best baseline, LDA+SVM, significantly. This shows that introducing background knowledge of better quality is helpful for the better classification of documents. Statistical significance is again verified using the two-tailed paired t-test computed at a 95% confidence level, based on the results of classification over the test set.

Discussion. One important question here is how much background knowledge to include. As mentioned in Section 4, we have tried two different solutions: (1) introducing a fixed amount of background knowledge for every document, and (2) setting a threshold and selecting the knowledge whose relevance weight exceeds the threshold.
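The two solutions just mentioned amount to two different selection rules over the ranked knowledge. A minimal sketch, with illustrative names, where each candidate is assumed to be a (triple, relevance_weight) pair as produced by the ranking model of Section 3:

    def select_fixed(ranked, n=5):
        # (1) Introduce a fixed number of top-ranked triples for every document.
        return [t for t, w in ranked[:n]]

    def select_by_threshold(ranked, threshold=6.4):
        # (2) Introduce every triple whose relevance weight to the source document
        # exceeds the threshold (6.4 is the value chosen later in this section).
        return [t for t, w in ranked if w > threshold]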
The results are shown in Table 4, where the systems with the threshold outperform those with a fixed amount, which shows that the threshold helps in the better introduction of background knowledge.

Model          P        R        F
Ours-S+Top5    0.8522   0.8444   0.8456
Ours-S+ThreD   0.8584   0.8489   0.8501
Ours+Top5      0.8769   0.8667   0.8677
Ours+ThreD     0.8748   0.8689   0.8691
Table 4: The performance of document classification with the full model (Ours) and the simplified model (Ours-S) to introduce knowledge.

We also evaluate the impact of different thresholds, as shown in Figure 5. The performance keeps improving as the threshold increases up to 6.4, remains steady from 6.4 to 6.7, and begins to decline sharply from 6.7. This is reasonable because at the beginning, as the threshold increases, we recall more background knowledge and provide more information; however, with the further increase of the threshold, we introduce more noise, which decreases the performance. In our experiments, we choose 6.4 as the final threshold.

Figure 5: The performance (P, R, F) of document classification models with different thresholds (6.0–7.0). The knowledge whose relevance weight to the source document exceeds the threshold is introduced as background knowledge.

6 Conclusion and Future Work

This study encodes distributional semantics into the triple-based background knowledge ranking model (Zhang et al., 2014) for better document enrichment. We first use LDA to represent every triple as a real-valued vector, which is used to evaluate the relatedness between triples, and then propose a modified iterative propagation model to rank all the triples of background knowledge. For evaluation, we conduct two series of experiments: (1) evaluation as a ranking problem, and (2) task-based evaluation, specifically on document classification. In the first set of experiments, our model outperforms multiple strong baselines based on VSM, LDA, and WE. In the second set of experiments, our full model with background knowledge outperforms the state-of-the-art systems significantly. Moreover, we also explore the impact of knowledge quality and show its importance.
In future work, we wish to explore a better way to encode distributional semantics by proposing a modified LDA for better triple representation. In addition, we also want to explore the effect of introducing background knowledge in conjunction with other NLP tasks, especially discourse parsing (Marcu, 2000; Pitler et al., 2009).

Acknowledgments

We would like to thank our colleagues for their great help. This work was partly supported by the National Natural Science Foundation of China via grant 61133012, the National 863 Leading Technology Research Project via grant 2015AA015407, and the National Natural Science Foundation of China Surface Project via grant 61273321.

References

David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022.
Danushka Bollegala, Yutaka Matsuo, and Mitsuru Ishizuka. 2007. Measuring semantic similarity between words using web search engines. Proceedings of the 16th International Conference on World Wide Web, 7:757–766.
Volha Bryl, Claudio Giuliano, Luciano Serafini, and Kateryna Tymoshenko. 2010. Using background knowledge to support coreference resolution. In Proceedings of ECAI 2010: 19th European Conference on Artificial Intelligence, volume 10, pages 759–764.
Wanxiang Che, Zhenghua Li, and Ting Liu. 2010. LTP: A Chinese language technology platform. In Proceedings of the 23rd International Conference on Computational Linguistics: Demonstrations, pages 13–16. Association for Computational Linguistics.
Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, volume 7, pages 708–716.
Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam Mausam. 2011. Open information extraction: The second generation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence - Volume One, pages 3–10. AAAI Press.
Samah Fodeh, Bill Punch, and Pang-Ning Tan. 2011. On ontology-driven document clustering using core semantic features. Knowledge and Information Systems, 28(2):395–421.
Xianpei Han, Le Sun, and Jun Zhao. 2011. Collective entity linking in web text: a graph-based method. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 765–774. ACM.
Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Longkai Zhang, and Houfeng Wang. 2013. Learning entity representation for entity disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), August.
Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 782–792. Association for Computational Linguistics.
Johannes Hoffart, Fabian M Suchanek, Klaus Berberich, and Gerhard Weikum. 2013. Yago2: a spatially and temporally enhanced knowledge base from Wikipedia. Artificial Intelligence, 194:28–61.
Xiaohua Hu, Xiaodan Zhang, Caimei Lu, Eun K Park, and Xiaohua Zhou. 2009. Exploiting Wikipedia as external knowledge for document clustering. In Proceedings of the 15th International Conference on Knowledge Discovery and Data Mining, pages 389–396. ACM.
Saurabh S Kataria, Krishnan S Kumar, Rajeev R Rastogi, Prithviraj Sen, and Srinivasan H Sengamedu. 2011. Entity disambiguation with hierarchical topic models. In Proceedings of the 17th International Conference on Knowledge Discovery and Data Mining, pages 1037–1045. ACM.
Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective annotation of Wikipedia entities in web text. In Proceedings of the 15th International Conference on Knowledge Discovery and Data Mining, pages 457–466. ACM.
Yogan Jaya Kumar and Naomie Salim. 2012. Automatic multi document summarization approaches. Journal of Computer Science, 8(1).
Christopher D Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval, volume 1. Cambridge University Press, Cambridge.
Daniel Marcu. 2000. The rhetorical parsing of unrestricted texts: A surface-based approach. Computational Linguistics, 26(3):395–448.
Vivi Nastase, Michael Strube, Benjamin Börschinger, Cäcilia Zirn, and Anas Elghafari. 2010. WikiNet: A very large scale multi-lingual concept network. In Proceedings of the 7th International Conference on Language Resources and Evaluation.
Patrick Pantel and Ariel Fuxman. 2011. Jigs and lures: Associating web queries with structured entities. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pages 83–92. Association for Computational Linguistics.
Xuan-Hieu Phan, Le-Minh Nguyen, and Susumu Horiguchi. 2008. Learning to classify short and sparse text & web with hidden topics from large-scale data collections. In Proceedings of the 17th International Conference on World Wide Web, pages 91–100. ACM.
Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2, pages 683–691. Association for Computational Linguistics.
Gerard Salton and Michael J. McGill. 1986. Introduction to Modern Information Retrieval. McGraw-Hill, Inc., New York, NY, USA.
Jean-Paul Sansonnet and François Bouchet. 2010. Extraction of agent psychological behaviors from glosses of WordNet personality adjectives. In Proceedings of the 8th European Workshop on Multi-Agent Systems (EUMAS10).
Fabrizio Sebastiani. 2002. Machine learning in automated text categorization. ACM Computing Surveys, 34(1):1–47, March.
Prithviraj Sen. 2012. Collective context-aware topic models for entity disambiguation. In Proceedings of the 21st International Conference on World Wide Web, pages 729–738. ACM.
Steffen Staab and Rudi Studer. 2009. Handbook on Ontologies. Springer Publishing Company, Incorporated, 2nd edition.
Koun-Tem Sun, Yueh-Min Huang, and Ming-Chi Liu. 2011. A WordNet-based near-synonyms and similar-looking word learning system. Educational Technology & Society, 14(1):121–134.
Ellen M Voorhees, Donna K Harman, et al. 2005. TREC: Experiment and Evaluation in Information Retrieval, volume 63. MIT Press, Cambridge.
Muyu Zhang, Bing Qin, Ting Liu, and Mao Zheng. 2014. Triple based background knowledge ranking for document enrichment. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 917–927, Dublin, Ireland, August. Dublin City University and Association for Computational Linguistics.
Zhicheng Zheng, Fangtao Li, Minlie Huang, and Xiaoyan Zhu. 2010. Learning to link entities with knowledge base. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 483–491. Association for Computational Linguistics.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 534–542, Beijing, China, July 26-31, 2015. © 2015 Association for Computational Linguistics

A Strategic Reasoning Model for Generating Alternative Answers

Jon Scott Stevens, Center for General Linguistics, Berlin, [email protected]
Anton Benz, Center for General Linguistics, Berlin
Sebastian Reuße, Ruhr University Bochum, [email protected]
Ralf Klabunde, Ruhr University Bochum

Abstract

We characterize a class of indirect answers to yes/no questions, alternative answers, where information is given that is not directly asked about, but which might nonetheless address the underlying motivation for the question. We develop a model rooted in game theory that generates these answers via strategic reasoning about possible unobserved domain-level user requirements. We implement the model within an interactive question answering system simulating real estate dialogue. The system learns a prior probability distribution over possible user requirements by analyzing training dialogues, which it uses to make strategic decisions about answer selection. The system generates pragmatically natural and interpretable answers which make for more efficient interactions compared to a baseline.

1 Introduction

In natural language dialogue, questions are often answered indirectly. This is particularly apparent for yes/no questions, where a wide range of responses beyond literal "yes" and "no" answers is available. Sometimes indirect answers serve to anticipate the next step of the hearer's plan, as in (1) (Allen and Perrault, 1980), where the literal answer is entailed by the supplied answer, and sometimes indirect answers leave it to the hearer to infer the literal answer from common contextual assumptions, as in (2) (de Marneffe et al., 2009).

(1) Q: Has the train to Windsor left yet?
    A: It's leaving soon from gate 7.
(2) Q: Is Sue at work?
    A: She's sick with the flu.

But other times there is no semantic link between the question and the supplied answer. Rather, the answer must be interpreted in light of the task-specific goals of the interlocutors. Consider (3) in a context where a customer is posing questions to a real estate agent with the aim of renting an apartment.

(3) Q: Does the apartment have a garden?
    A: Well, it has a large balcony.

Whether there is a balcony has no logical bearing on whether there is a garden. Intuitively, the realtor is inferring that the customer's question might have been motivated by a more general requirement (perhaps the customer wants a place to grow flowers) and supplying an alternative attribute to satisfy that requirement. In this case the answerer must reason about which attributes of an apartment might satisfy a customer who would ask about a garden. Note that multiple motivating requirements are possible (perhaps the customer just wants to relax outside), such that the answerer might just as easily have said, "It has a large balcony, and there is a park close by." In either case, the hearer can infer from the lack of a direct answer that the apartment must not have a garden, because if it did, to say so would have been more obviously helpful.
This paper focuses on this class of answers, which we call alternative answers. We characterize these as indirect answers to yes/no questions that offer attributes of an object under discussion which might satisfy an unobserved domain-level requirement of the questioner.
We conceive of a requirement as a set of satisfying conditions, such that a particular domain-related need would be met by any one member of the set. For example, in the context of (3) we can encode a possible customer requirement of a place to grow flowers in an apartment, FLOWERS = {GARDEN, BALCONY}, such that either GARDEN or BALCONY would suffice to satisfy the requirement.
In order to generate alternative answers automatically, we must first solve two problems: (i) how does one learn and represent a space of likely user requirements, and (ii) how does one use such a space to select indirect answers? To do this in a natural, pragmatically interpretable way, we must not only derive answers like in (3), but crucially, also rule out infelicitous responses like the following, where a logically possible alternative leads to incoherence due to the low probability of an appropriate requirement like {GARDEN, BASEMENT}. (In other words, wanting a garden has little effect on the probability of wanting a basement.)

(4) Q: Does the apartment have a garden?
    A: #Well, it has a large basement.

To solve these problems, we propose an approach rooted in decision-theoretic and game-theoretic analyses of indirectness in natural language (van Rooij, 2003; Benz and van Rooij, 2007; Benz et al., 2011; Stevens et al., 2014) whereby a system uses strategic reasoning to derive an optimal response to a yes/no question given certain domain assumptions. The model operates by assuming that both the questioner and the answerer are rational, i.e. that both participants want to further their own goals, and will behave so as to maximize the probability of success at doing so. One appeal of the strategic approach is its relative simplicity: the model utilizes a learned probability distribution over possible domain-level requirements of the questioner and applies simple probabilistic reasoning to feed content selection during online answer generation. Unlike plan inference approaches, we do not need to represent any complex taxonomies of stimulus conditions (Green and Carberry, 1994) or coherence relations (Green and Carberry, 1999; Asher and Lascarides, 2003).
By implementing the strategic reasoning model within a simple interactive question answering system (Konstantinova and Orasan, 2012), simulating real estate dialogues with exchanges like in (3), we are able to evaluate the current approach quantitatively in terms of dialogue efficiency, perceived coherence of the supplied answers, and the ability of users to draw natural pragmatic inferences. We conclude that strategic reasoning provides a promising framework for developing answer generation methods by starting with principled theoretical analyses of human dialogue.
The following section presents the model, including a concrete content selection algorithm used for producing answers to questions, and then walks through a simple illustrative example. Section 3 describes our implementation, addresses the problem of learning requirement probabilities, and presents the results of our evaluation, providing quantitative support for our approach. Section 4 concludes with a general summary.

2 Model

2.1 Overview

We derive our model beginning with a simple description of the discourse situation. In our case, this is an exchange of questions and answers where a user poses questions to be answered by an expert who has access to a database of information that the user wants. The expert has no advance knowledge of the database, and thus must look up information as needed.
Each user question is motivated by a requirement, conceived of as a (possibly singleton) set of database attributes (restricted for current purposes to boolean attributes), any one of which satisfies a user need (e.g. {GARDEN, BALCONY} in the previous section). Only the user has direct access to her own requirements, and only the expert can query the database to inform the user whether her requirements can be satisfied. For current purposes we assume that each question and answer in the dialogue pertains to a specific object o from the database which is designated as the object under discussion. This way we can represent answers and question denotations with attributes, like GARDEN, where the queried/supplied attribute is assumed to predicate over o. In these terms, the expert can either ASSERT an attribute (if it holds of o) or DENY an attribute (if it does not hold of o) in response to a user query.
Now we describe the goals of the interlocutors. The user wants her requirements to be satisfied, and will not accept an object until she is sure this is the case. If it is clear that an object cannot satisfy one or more requirements, the user will ask to discuss a different object from the database. We can thus characterize the set of possible user responses as follows: the user may ACCEPT the object as one that meets all requirements, the user may REJECT the object and ask to see something else, or the user may FOLLOW UP, continuing to pose questions about the current object. The user's goal, then, is ultimately to accept an object that in fact satisfies her requirements, and to reject any object that does not. The expert's goal is to help the user find an optimal object as efficiently as possible.
Given this goal, the expert does better to provide alternative attributes (like BALCONY for GARDEN in (3)) in place of simple "no" answers only when those attributes are relevant to the user's underlying requirements. To use some economic terminology, we can define the benefit (B) of looking up a potential alternative attribute a in the database as a binary function indicating whether a is relevant to (i.e. a member of) the user requirement ρ which motivated the user's question. For example, in (3), if the user's question is motivated by requirement {GARDEN, BALCONY}, then the benefit of looking up whether there is a balcony is 1, because if that attribute turns out to hold of o, then the customer's requirement is satisfied. If, on the other hand, the questioner has requirement {GARDEN}, then the benefit of looking up BALCONY is 0, because this attribute cannot satisfy this requirement.

B(a | ρ) = 1 if a ∈ ρ, and 0 otherwise   (1)

Regardless of benefit, the expert incurs a cost by looking up information. To fully specify what cost means in this context, first assume a small, fixed effort cost associated with looking up an attribute. Further assume a larger cost incurred when the user has to ask a follow-up question to find out whether a requirement is satisfied. What really matters are not the raw cost amounts, which may be very small, but rather the relative cost of looking up an attribute compared to that of receiving a follow-up. We can represent the ratio of look-up cost to follow-up cost as a constant κ, which encodes the reluctance of the expert to look up new information. Intuitively, if κ is close to 1 (i.e. if follow-ups are not much more costly than simple look-ups), the expert should give mostly literal answers, and if κ is close to 0 (i.e.
if the relative follow-up cost is very high), the expert should look up all potentially beneficial attributes. With this, let the utility (U) of looking up a be the benefit of looking up a minus the relative cost.

U(a | ρ) = B(a | ρ) − κ   (2)

The expert is utility-maximizing under game-theoretic assumptions, and (assuming a baseline utility of zero for doing nothing) should aim to look up attributes for which U is positive, i.e. for which benefit outweighs cost. But the expert has a problem: ρ, on which U depends, is known only to the user. Therefore, the best the expert can do is to reason probabilistically, based on the user's question, to maximize expected utility, or the weighted average of U(a | ρ) for all possible values of ρ. The expected utility of looking up an attribute a can be written as the expected benefit of a—the weighted average of B(a | ρ) for all ρ—minus the relative cost. Let REQS be the set of all possible user requirements and let q be the user's question.

EU(a | q, REQS) = EB(a | q, REQS) − κ   (3)
EB(a | q, REQS) = Σ_{ρ ∈ REQS} P(ρ | q) × B(a | ρ)   (4)

The probability of a user requirement P(ρ | q) is calculated via Bayes' rule, assuming that users will choose their questions randomly from the set of questions whose denotations are in their requirement set. This yields the following.

P(ρ | q) = P(q | ρ) × P(ρ) / Σ_{ρ′ ∈ REQS} P(q | ρ′) × P(ρ′)   (5)
P(q | ρ) = 1/|ρ| if ⟦q⟧ ∈ ρ, and 0 otherwise   (6)

The prior probability of a user requirement, P(ρ), is given as input to the model. We will see in the next section that it is possible to learn a prior probability distribution from training dialogues.
We have now fully characterized the expected benefit (EB) of looking up an attribute in the database. As per Eq. 3, the expert should only bother looking up an attribute if EB is greater than the relative cost κ, since that is when EU is positive. The final step is to give the expert a sensible way to iteratively look up attributes to potentially produce multiple alternatives. To this end, we first point out that if an alternative has been found which satisfies a certain requirement, then it no longer adds any benefit to consider that requirement when selecting further alternatives. For example, in the context of example (3), when the realtor queries the database and finds that the apartment has a balcony, she no longer needs to consider the probability of a requirement {BALCONY, GARDEN} when considering additional attributes, since that requirement is already satisfied. Given this consideration, the order in which database attributes are looked up can make a difference to the outcome. So, we need a consistent and principled criterion for determining the order in which to look up attributes. The most efficient method is to start with the attribute with the highest possible EB value and then iteratively move down to the next best attribute until EB is less than or equal to cost. Note that the attribute that was asked about will always have an EB value of 1.
Consider again the QA exchange in (3). Recall that the expert assumes that the user's query is relevant to an underlying requirement ρ. This means that ρ must contain the attribute GARDEN. Therefore, by definition, supplying GARDEN will always yield positive benefit. We can use this fact to explain how alternative answers are interpreted by the user. The user knows that the most beneficial attribute to look up (in terms of EB) is the one asked about. If that attribute is not included in the answer, the user is safe to assume that it does not hold of the object under discussion.
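For concreteness, the quantities in Eqs. (1)–(6) can be written out directly. The sketch below represents requirements as frozensets of attribute names and is only an illustration of the formulas, not the authors' code; it assumes the queried attribute appears in at least one requirement with nonzero prior, so the normalizer in Eq. (5) is positive.

    def benefit(a, rho):
        # Eq. (1): B(a | rho) = 1 if the attribute can satisfy the requirement.
        return 1.0 if a in rho else 0.0

    def question_likelihood(q, rho):
        # Eq. (6): P(q | rho) = 1/|rho| if the queried attribute is in rho.
        return 1.0 / len(rho) if q in rho else 0.0

    def posterior(rho, q, priors):
        # Eq. (5): Bayesian posterior over requirements given the question.
        # priors: dict mapping each requirement (frozenset) to its prior P(rho).
        norm = sum(question_likelihood(q, r) * p for r, p in priors.items())
        return question_likelihood(q, rho) * priors[rho] / norm

    def expected_benefit(a, q, priors):
        # Eq. (4): EB(a | q, REQS) = sum over rho of P(rho | q) * B(a | rho).
        return sum(posterior(rho, q, priors) * benefit(a, rho) for rho in priors)

    def expected_utility(a, q, priors, kappa):
        # Eqs. (2)-(3): expected benefit minus the relative look-up cost kappa.
        return expected_benefit(a, q, priors) - kappa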
By reasoning about the expert's reasoning, the user can derive the implicature that the literal answer to her question is "no". In fact, this is what licenses the expert to leave the negation of the garden attribute out of the answer: the expert knows that the user knows that the expert would have included it if it were true. This type of "I know that you know" reasoning is characteristic of game-theoretic analysis. (It can be shown that the answer selection algorithm presented in this section, combined with a simple user interpretation model, constitutes a perfect Bayesian equilibrium (Harsanyi, 1968; Fudenberg and Tirole, 1991) in a signaling game (Lewis, 1969) with private hearer types which formally describes this kind of dialogue.)

2.2 Algorithm and example

Our algorithm for generating alternative answers (Algorithm 1), which simulates strategic reasoning by the expert in our dialogue situation, is couched in a simple information state update (ISU) framework (Larsson and Traum, 2000; Traum and Larsson, 2003), whereby the answerer keeps track of the current object under discussion (o) as well as a history of attributes looked up for o (HISTo). The output of the algorithm takes the form of a dialogue move, either an assertion (or set of assertions) or a denial that an attribute holds of o. These dialogue moves can then be translated into natural language with simple sentence templates. The answerer uses HISTo to make sure redundant alternatives aren't given across QA exchanges. If all possible answers are redundant, the answerer falls back on a direct yes/no response.
To illustrate how the algorithm works, consider a simple toy example. Table 1 gives a hypothetical space of possible requirements along with a distribution of priors, likelihoods and Bayesian posteriors. We imagine that a customer might want a garden (ρG), or more generally a place to grow flowers (ρF), a place for their child to play outside (ρP), or, in rare cases, either a garden or a basement to use as storage space (ρS). The rather odd nature of ρS is reflected in its low prior.

Requirement set             P(ρ)   P(q|ρ)   P(ρ|q)
ρG = {GARDEN}               0.5    1        0.67
ρF = {GARDEN, BALCONY}      0.25   0.5      0.17
ρP = {GARDEN, PARK}         0.2    0.5      0.13
ρS = {GARDEN, BASEMENT}     0.05   0.5      0.03
Table 1: A toy example of a customer requirement space with probabilities for q = 'Does the apartment have a garden?'

Consider a variant of (3) where HISTo is empty, and where DBo contains BALCONY, PARK and BASEMENT.

(5) Q: Does the apartment have a garden?
    A: It has a balcony, and there is a park very close by.

To start, let REQS contain the requirements in Table 1, and let κ = 0.1. The algorithm derives the answer as follows. First, the algorithm looks up whether GARDEN holds of o. It does not hold, so GARDEN is not added to the answer; it is only added to the history of looked-up attributes.

a = GARDEN; EB(GARDEN) = 1; HISTo = {GARDEN}

Then, the system finds the next best attribute, BALCONY, which does hold of o, appends it to the answer as well as to the history, and removes the relevant requirement from consideration.

a = BALCONY; EB(BALCONY) = 0.17; HISTo = {GARDEN, BALCONY}; ANSWER = {BALCONY}; REQS = {ρG, ρP, ρS}

The attribute PARK is similarly added.

a = PARK; EB(PARK) = 0.13; HISTo = {GARDEN, BALCONY, PARK}; ANSWER = {BALCONY, PARK}; REQS = {ρG, ρS}
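Plugging the priors of Table 1 into the functions sketched after Eq. (6) reproduces the posteriors and EB values used in this walkthrough (a check under the same attribute names; not part of the system):

    priors = {
        frozenset({'GARDEN'}): 0.5,                # rho_G
        frozenset({'GARDEN', 'BALCONY'}): 0.25,    # rho_F
        frozenset({'GARDEN', 'PARK'}): 0.2,        # rho_P
        frozenset({'GARDEN', 'BASEMENT'}): 0.05,   # rho_S
    }
    q = 'GARDEN'
    for a in ['GARDEN', 'BALCONY', 'PARK', 'BASEMENT']:
        print(a, round(expected_benefit(a, q, priors), 2))
    # GARDEN 1.0, BALCONY 0.17, PARK 0.13, BASEMENT 0.03 (cf. Table 1)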
The attribute BASEMENT is next in line. However, its EB value is below the threshold of 0.1, due to its low prior probability, so the iteration stops there and BASEMENT is never looked up.

a = BASEMENT; EB(BASEMENT) = 0.03; EB < κ; exit loop

Algorithm 1: An algorithm for generating alternative answers
Input: A set of attributes Φ, an object under discussion o, a database DBo of attributes which hold of o, a history HISTo of attributes that have been looked up in the database, a set of possible user requirements REQS, a prior probability distribution over REQS, a user-supplied question q with denotation ⟦q⟧, and a relative cost threshold κ ∈ (0, 1)
Initialize: ANSWER = {}; LOOKUP = TRUE
1: while LOOKUP do
2:    Φ′ = (Φ \ HISTo) ∪ {⟦q⟧}    ▷ Only consider alternatives once per object per dialogue.
3:    a = argmax_{φ ∈ Φ′} EB(φ | q, REQS)    ▷ Find the best candidate answer.
4:    if EB(a | q, REQS) > κ then    ▷ Check whether expected benefit outweighs cost.
5:        HISTo = HISTo ∪ {a}    ▷ Log which attribute has been looked up.
6:        if a ∈ DBo then
7:            ANSWER = ANSWER ∪ {a}    ▷ Add to answer if attribute holds.
8:            REQS = REQS \ {ρ ∈ REQS | ρ ∩ ANSWER ≠ ∅}    ▷ Don't consider requirements that are already satisfied.
9:        end if
10:   else
11:       LOOKUP = FALSE    ▷ Stop querying the database when there are no promising candidates left.
12:   end if
13: end while
14: if ANSWER ≠ ∅ then ASSERT(ANSWER)
15: else DENY(⟦q⟧)
16: end if

3 Implementation and evaluation

3.1 Setup

A simple interactive question answering system was built using a modified version of the PyTrindiKit toolkit (https://code.google.com/p/py-trindikit), with a database back end implemented using an adapted version of PyKE, a Horn logic theorem prover (http://pyke.sourceforge.net/). The system was set up to emulate the behavior of a real estate agent answering customers' yes/no questions about a range of attributes pertaining to individual apartments. A set of 12 attributes was chosen for the current evaluation experiment. The system generates answers by first selecting a discourse move (i.e. assertion or denial of an attribute) and then translating the move into natural language with simple sentence templates like "It has a(n) X" or "There is a(n) X nearby". When answers are indirect (i.e. not asserting or denying the attribute asked about), the system begins its reply with the discourse connective "well", as in example (3). (Early feedback indicated that alternative answers were more natural when preceded by such a discourse connective. To assess this effect, we ran a separate evaluation experiment with an earlier version of the system that produced alternative answers without "well". Dialogue lengths and coherence scores were not very different from what is reported in this section; however, in contrast with the current evaluation, we found a large effect of model type (a 69% decrease for strategic vs. literal) on whether the subjects successfully completed the task (z=-2.19, p=0.03). This is consistent with the early feedback.)
Subjects interacted with our system by means of an online text-based interface accessible remotely through a web browser. At the outset of the experiment, subjects were told to behave as if they were finding an apartment for a hypothetical friend, and given a list of requirements for that friend. The task required them to identify which from among a sequence of presented apartments would satisfy the given set of requirements. One out of four lists, each containing three requirements (one of which was a singleton), was assigned to subjects at random. The requirements were constructed by the researchers to be plausible desiderata for users looking for a place to rent or buy (e.g. connection to public transit, which could be satisfied either by a nearby bus stop or by a nearby train station).
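Returning briefly to Section 2.2, the selection loop of Algorithm 1 could be realized along the following lines, building on the posterior() and helper functions sketched earlier. This is our reading of the algorithm, not the PyTrindiKit-based implementation used in the experiments: posteriors are computed once from the full requirement space (matching the walkthrough above), each attribute is looked up at most once per call, and the dialogue move is returned as a simple tuple.

    def select_answer(q, attributes, db_o, hist_o, priors, kappa):
        # q: queried attribute; attributes: the full attribute set Phi;
        # db_o: set of attributes that hold of o; hist_o: set of attributes
        # already looked up for o; priors: dict of requirement -> prior P(rho).
        post = {rho: posterior(rho, q, priors) for rho in priors}
        reqs = set(priors)
        answer = set()
        candidates = (set(attributes) - set(hist_o)) | {q}   # queried attribute always considered

        def eb(a):
            # EB over the requirements that are not yet satisfied.
            return sum(post[rho] for rho in reqs if a in rho)

        while candidates:
            a = max(candidates, key=eb)
            if eb(a) <= kappa:                 # expected benefit no longer outweighs cost
                break
            candidates.discard(a)
            hist_o.add(a)                      # log the database look-up
            if a in db_o:                      # the attribute holds of o
                answer.add(a)
                reqs = {rho for rho in reqs if not (rho & answer)}   # already satisfied
        return ('ASSERT', sorted(answer)) if answer else ('DENY', q)

With the Table 1 priors and κ = 0.1, calling select_answer('GARDEN', {'GARDEN', 'BALCONY', 'PARK', 'BASEMENT'}, db_o={'BALCONY', 'PARK', 'BASEMENT'}, hist_o=set(), priors=priors, kappa=0.1) returns ('ASSERT', ['BALCONY', 'PARK']), matching example (5).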
The apartments presented by the system were individually generated for each experiment such that there was an apartment satisfying one attribute for each possible combination of the three requirements issued to subjects, plus two additional apartments that each satisfied two of the conditions (2³ + 2 = 10 apartments overall). Attributes outside a subject's requirement sets were added at random to assess the effect of "unhelpful" alternative answers. Subjects interacted with one of two answer generation models: a literal model, which only produced direct yes/no answers, and the strategic model as outlined above. Crucially, in both conditions, the sequence in which objects were presented was fixed so that the last apartment offered would be the sole object satisfying all of the desired criteria. Also, we set the strategic model's κ parameter high enough (1/7) that only single-attribute answers were ever given. These two properties of the task, taken together, allow us to obtain an apples-to-apples comparison of the models with respect to average dialogue length. If subjects failed to accept the optimal solution, the interaction was terminated.
After completing interaction with our system, subjects were asked to complete a short survey designed to get at the perceived coherence of the system's answers. Subjects were asked to rate, on a seven-point Likert scale, the relevance of the system's answers to the questions asked, overall helpfulness, the extent to which questions seemed to be left open, and the extent to which the system seemed evasive.
We predict that the strategic system will improve the overall efficiency of dialogue over that of the literal system by (i) offering helpful alternatives to satisfy the customer's needs, and (ii) allowing customers to infer implicit "no" answers from alternative answers, leading to rejections of sub-optimal apartments. If, contrary to our hypothesis, subjects fail to draw inferences/implicatures from alternative answers, then we expect unhelpful alternatives (i.e. alternatives not in the user's requirement set) to prompt repeated questions and/or failures to complete the task. With respect to the questionnaire items, the literal system is predicted to be judged maximally coherent, since only straightforward yes/no answers are offered. The question is whether the pragmatic system also allows for coherent dialogue. If subjects judge alternative answers to be incoherent, then we expect any difference in average Likert scale ratings between the strategic and literal systems to reflect the proportion of alternative answers that are given.

3.2 Learning prior probabilities

Before presenting our results, we explain how prior probabilities can be learned within this framework. One of the assumptions of the strategic reasoning model is that users ask questions that are motivated by specific requirements. Moreover, we should assume that users employ a reasonable questioning strategy for finding out whether
U: I’d like to see something else Figure 1: An example of the negation-rejection sequence ⟨GARDEN, BALCONY⟩ requirements hold, which is tailored to the system they are interacting with. For example, if a user interacts with a system that only produces literal yes/no answers, the user should take all answers at face value, not drawing any pragmatic inferences. In such a scenario, we expect the user’s questioning strategy to be roughly as follows: for a1, a2, · · · , an in requirement ρ, ask about a1, then if a1 is asserted, accept (or move on to the next requirement if there are multiple requirements), and if not, ask about a2; if a2 is asserted, accept, and if not, ask about a3, and so on, until an is asked about. If an is denied, then reject the object under discussion. If you need a place to grow flowers, ask if there is a balcony or garden, then, if the answer is no, ask about the other attribute. If no “yes” answers are given, reject. Such a strategy predicts that potential user requirements should be able to be gleaned from dialogues with a literal system by analyzing negationrejection sequences (NRSs). A negation-rejection sequence is a maximal observed sequence of questions which all receive “no” answers, without any intervening “yes” answers or any other intervening dialogue moves, such that at the end of that sequence of questions, the user chooses to reject the current object under discussion. Such a sequence is illustrated in Fig.1. By hypothesis, the NRS ⟨GARDEN, BALCONY⟩indicates a possible user requirement {GARDEN, BALCONY}. By considering NRSs, the system can learn from training data a reasonable prior probability distribution over possible customer requirements. This obviates the need to pre-supply the system with complex world knowledge. If customer requirements can in principle be learned, then the strategic approach could be expanded to dialogue situations where the distribution of user requirements could not sensibly be pre-supplied. While the system in its current form is not guaranteed to scale up in this way, its success here provides us with a promising proof of concept. 539 Using the dialogues with the literal system as training data, we were able to gather frequencies of observed negation-rejection sequences. By transforming the sequences into unordered sets and then normalizing the frequencies of those sets, we obtained a prior probability distribution over possible customer requirements. In the training dialogues, subjects were given the same lists of requirements as was given for the evaluation of the strategic model. If successful, the system should use the yes/no dialogue data to learn high probabilities for requirements which customers actually had, and low probabilities for any others, allowing us to evaluate the system without giving it any prior clues as to which customer requirements were assigned. Because we know in advance which requirements the subjects wanted to fulfill, we have a gold standard against which we can compare the question-alternative answer pairs that different variants of the model are able to produce. For example, we know that if a subject asked whether the apartment had a balcony and received an answer about a nearby café, that answer could not have been beneficial, since no one was assigned the requirement {CAFÉ, BALCONY}. 
Table 2 compares three variant models: (i) the system we use in our evaluation, which sets prior probabilities proportional to NRS frequency; (ii) a system with flat priors, where the probability is zero if the NRS frequency is zero, but where all observed NRSs are taken to correspond to equiprobable requirements; and finally (iii) a baseline which does not utilize an EB threshold, but rather simply randomly selects alternatives which were observed at least once in an NRS with the queried attribute. These models are compared by the maximum benefit of their possible outputs, using best-case values for κ. We see that there is a good match between the answers given by the strategic model with learned priors and the actual requirements that users were told to fulfill. Though it remains to be seen whether this would scale up to more complex requirement spaces, this result suggests that NRSs can in fact be indicative of disjunctive requirement sets, and can indeed be useful in learning what possible alternatives might be. For purposes of our evaluation, we will see that the method was successful.

Model             Precision   Recall   F1
Frequency-based   1           0.92     0.96
Flat              0.88        0.92     0.90
Baseline          0.23        1        0.37
Table 2: Comparison of best-case output with respect to potential benefit of alternative answer types to subjects. Precision = hits / (hits + misses), and Recall = hits / possible hits. A "hit" is a QA pair which is a possible output of the model, such that A could be a beneficial answer to a customer asking Q, and a "miss" is such a QA pair such that A is irrelevant to Q.

3.3 Evaluation results

We obtained data from a total of 115 subjects via Amazon Mechanical Turk; 65 subjects interacted with the literal comparison model, and 50 subjects interacted with the strategic model. We excluded a total of 13 outliers across both conditions who asked too few or too many questions (1.5 interquartile ranges below the 1st or above the 3rd quartile). These subjects either quit the task early or simply asked all available questions even for apartments that were obviously not a good fit for their requirements. Two subjects were excluded for not filling out the post-experiment questionnaire. This left 100 subjects (59 literal/41 strategic), of which 86 (49/37) successfully completed the task, accepting the object which met all assigned requirements. There was no statistically significant difference between the literal and strategic models with respect to task success.
We first compare the literal and strategic models with regard to dialogue length, looking only at the subjects who successfully completed the task. Due to the highly structured nature of the experiment, a successful dialogue always consisted of 10 apartment proposals, some number of QA pairs (where each question was given a single answer), 9 rejections and, finally, one acceptance. This allows us to use the number of questions asked as a proxy for dialogue length. Figure 2 shows the comparison. The strategic model yields 27.4 questions on average, more than four fewer than the literal model's 31.6. Standard statistical tests show the effect to be highly significant, with a one-way ANOVA yielding F=16.2, p=0.0001, and a mixed effects regression model with a random slope for item (the items in this case being the set of requirements assigned to the subject) yielding t=4, p=0.0001.

Figure 2: Avg. number of QA pairs by model (number of questions, literal vs. strategic).
We now ask whether the observed effect is due only to the presence of helpful alternatives which preclude the need for follow-up questions, or whether the ability of users to draw pragmatic inferences from unhelpful alternatives (i.e. alternatives that don't actually satisfy the user's requirement) also contributes to dialogue efficiency. Figure 3, taken from a real dialogue with our system, illustrates such an inference. The subject specifically wants a café nearby, and infers from the alternative answer that this requirement cannot be satisfied, and therefore rejects. The subject could have asked the question again to get a direct answer, which would have had a negative effect on dialogue efficiency, but this did not happen.

S: How about an apartment in the east of the city? I have an offer for you there.
U: Does the apartment have a café nearby?
S: Well, there is a restaurant nearby.
U: I'd like to see something else
Figure 3: A QA exchange from a dialogue where the user was instructed to find an apartment with a café nearby

We want to know if subjects' aggregate behavior reflects this example. First, take the null hypothesis to be that subjects do not reliably draw such negative implicatures. In that case we would expect a certain proportion of questions to be repeated. Subjects are allowed to ask questions multiple times, and alternatives are never presented twice, such that repeating questions will ultimately lead to a direct yes/no answer. We do see some instances of this behavior in the dialogues. If this is indicative of an overall difficulty in drawing pragmatic inferences from an online dialogue system, then we expect the number of such repetitions to reflect the number of unhelpful alternatives that are offered. Instead, we find that when we plot a linear regression of repeated questions vs. unhelpful alternatives, we get a flat line with no observable correlation (Fig. 4). Moreover, we also find no effect of unhelpful alternatives on whether the task was successfully completed. This suggests that the correct inferences are being drawn, as in Fig. 3.

Figure 4: Proportion unhelpful alternatives vs. proportion repeated questions

We now look at the perceived coherence of the dialogues as assessed by our post-experiment questionnaire. We obtain a composite coherence score from all coherence-related items on the seven-point Likert scale by summing all per-item scores for each subject and normalizing them to a unit interval, where 1 signifies the upper bound of perceived coherence. Although there is a difference in mean coherence score between the strategic and literal models, with the strategic model exhibiting 88% perceived coherence and the literal model 93%, the difference is not statistically significant. Moreover, we can rule out the possibility that the strategic model is judged to be coherent only when the number of alternative answers is low. To rule this out, we calculate the expected coherence score under the null hypothesis that coherence is directly proportional to the proportion of literal answers. Taking the literal model's average score of 0.93 as a ceiling, we multiply this by the proportion of literal answers to obtain a null hypothesis expected score of about 0.75 for the strategic model. This null hypothesis is disconfirmed (F=12.5, t=30.6, p<0.01).
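The null-hypothesis calculation above is simple arithmetic; spelled out, with the proportion of literal answers treated as an input since its exact value is not repeated here:

    def expected_coherence_if_naive(literal_ceiling, literal_answer_proportion):
        # Null hypothesis: perceived coherence is directly proportional to the
        # proportion of literal answers, with the literal model's score (0.93)
        # taken as the ceiling.
        return literal_ceiling * literal_answer_proportion

    # A literal-answer proportion of roughly 0.8 (implied by, not stated in, the
    # reported figures) gives 0.93 * 0.8, i.e. about 0.75, the value above -- well
    # below the observed 0.88 for the strategic model.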
The strategic model is judged, by the criteria assessed by our post-experiment questionnaire, to be pragmatically coherent independently of the rate of indirect answers given.

4 Conclusion

We have characterized the class of alternative answers to yes/no questions and proposed a content selection model for generating these answers in dialogue. The model is based on strategic reasoning about unobserved user requirements, and builds on work in game-theoretic pragmatics (Benz and van Rooij, 2007; Stevens et al., 2014). The model was implemented as an answer selection algorithm within an interactive question answering system in a real estate domain. We have presented an evaluation of this system against a baseline which produces only literal answers. The results show that the strategic reasoning approach leads to efficient dialogues, allows pragmatic inferences to be drawn, and does not dramatically reduce the overall perceived coherence or naturalness of the produced answers. Although the strategic model requires a form of world knowledge—knowledge of possible user requirements and their probabilities—we have shown that there is a simple method, the analysis of negation-rejection sequences in yes/no QA exchanges, that can be used to learn this knowledge with positive results. Further research is required to address issues of scalability and generalizability, but the current model represents a promising step in the direction of pragmatically competent dialogue systems with a solid basis in formal pragmatic theory.

Acknowledgments

This work has been supported by the Deutsche Forschungsgemeinschaft (DFG) (Grant nrs. BE 4348/3-1 and KL 1109/6-1, project 'Pragmatic Requirements for Answer Generation in a Sales Dialogue'), and by the Bundesministerium für Bildung und Forschung (BMBF) (Grant nr. 01UG0711).

References

James F. Allen and C. Raymond Perrault. 1980. Analyzing intention in utterances. Artificial Intelligence, 15(3):143–178.
N. Asher and A. Lascarides. 2003. Logics of Conversation. Studies in Natural Language Processing. Cambridge University Press.
Anton Benz and Robert van Rooij. 2007. Optimal assertions, and what they implicate. A uniform game theoretic approach. Topoi, 26(1):63–78.
Anton Benz, Nuria Bertomeu, and Alexandra Strekalova. 2011. A decision-theoretic approach to finding optimal responses to over-constrained queries in a conceptual search space. In Proceedings of the 15th Workshop on the Semantics and Pragmatics of Dialogue, pages 37–46.
Marie-Catherine de Marneffe, Scott Grimm, and Christopher Potts. 2009. Not a simple yes or no. In Proceedings of the SIGDIAL 2009 Conference: The 10th Annual Meeting of the Special Interest Group on Discourse and Dialogue, pages 136–143.
Dan Fudenberg and Jean Tirole. 1991. Perfect Bayesian equilibrium and sequential equilibrium. Journal of Economic Theory, 53(2):236–260.
Nancy Green and Sandra Carberry. 1994. Generating indirect answers to yes-no questions. In Proceedings of the Seventh International Workshop on Natural Language Generation, pages 189–198.
Nancy Green and Sandra Carberry. 1999. Interpreting and generating indirect answers. Computational Linguistics, 25(3):389–435.
John C. Harsanyi. 1968. Games of incomplete information played by 'Bayesian' players, part II. Management Science, 14(5):320–334.
Natalia Konstantinova and Constantin Orasan. 2012. Interactive question answering. Emerging Applications of Natural Language Processing: Concepts and New Research, pages 149–169.
Staffan Larsson and David R. Traum. 2000. Information state and dialogue management in the TRINDI dialogue move engine toolkit. Natural Language Engineering, 6(3&4):323–340.
Information state and dialogue management in the TRINDI dialogue move engine toolkit. Natural Language Engineering, 6(3&4):323–340. David Lewis. 1969. Convention: A Philosophical Study. Cambridge University Press, Cambridge. Jon Scott Stevens, Anton Benz, Sebastian Reuße, Ronja Laarmann-Quante, and Ralf Klabunde. 2014. Indirect answers as potential solutions to decision problems. In Proceedings of the 18th Workshop on the Semantics and Pragmatics of Dialogue, pages 145–153. David R. Traum and Staffan Larsson. 2003. The information state approach to dialogue management. In Jan van Kuppevelt and Ronnie W. Smith, editors, Current and new directions in discourse and dialogue, pages 325–353. Springer. Robert van Rooij. 2003. Questioning to resolve decision problems. Linguistics and Philosophy, 26(6):727–763. 542
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 543–552, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Modeling Argument Strength in Student Essays Isaac Persing and Vincent Ng Human Language Technology Research Institute University of Texas at Dallas Richardson, TX 75083-0688 {persingq,vince}@hlt.utdallas.edu Abstract While recent years have seen a surge of interest in automated essay grading, including work on grading essays with respect to particular dimensions such as prompt adherence, coherence, and technical quality, there has been relatively little work on grading the essay dimension of argument strength, which is arguably the most important aspect of argumentative essays. We introduce a new corpus of argumentative student essays annotated with argument strength scores and propose a supervised, feature-rich approach to automatically scoring the essays along this dimension. Our approach significantly outperforms a baseline that relies solely on heuristically applied sentence argument function labels by up to 16.1%. 1 Introduction Automated essay scoring, the task of employing computer technology to evaluate and score written text, is one of the most important educational applications of natural language processing (NLP) (see Shermis and Burstein (2003) and Shermis et al. (2010) for an overview of the state of the art in this task). A major weakness of many existing scoring engines such as the Intelligent Essay AssessorTM(Landauer et al., 2003) is that they adopt a holistic scoring scheme, which summarizes the quality of an essay with a single score and thus provides very limited feedback to the writer. In particular, it is not clear which dimension of an essay (e.g., style, coherence, relevance) a score should be attributed to. Recent work addresses this problem by scoring a particular dimension of essay quality such as coherence (Miltsakaki and Kukich, 2004), technical errors, relevance to prompt (Higgins et al., 2004; Persing and Ng, 2014), organization (Persing et al., 2010), and thesis clarity (Persing and Ng, 2013). Essay grading software that provides feedback along multiple dimensions of essay quality such as E-rater/Criterion (Attali and Burstein, 2006) has also begun to emerge. Our goal in this paper is to develop a computational model for scoring the essay dimension of argument strength, which is arguably the most important aspect of argumentative essays. Argument strength refers to the strength of the argument an essay makes for its thesis. An essay with a high argument strength score presents a strong argument for its thesis and would convince most readers. While there has been work on designing argument schemes (e.g., Burstein et al. (2003), Song et al. (2014), Stab and Gurevych (2014a)) for annotating arguments manually (e.g., Song et al. (2014), Stab and Gurevych (2014b)) and automatically (e.g., Falakmasir et al. (2014), Song et al. (2014)) in student essays, little work has been done on scoring the argument strength of student essays. It is worth mentioning that some work has investigated the use of automatically determined argument labels for heuristic (Ong et al., 2014) and learning-based (Song et al., 2014) essay scoring, but their focus is holistic essay scoring, not argument strength essay scoring. In sum, our contributions in this paper are twofold. 
First, we develop a scoring model for the argument strength dimension on student essays using a feature-rich approach. Second, in order to stimulate further research on this task, we make our data set consisting of argument strength annotations of 1000 essays publicly available. Since progress in argument strength modeling is hindered in part by the lack of a publicly annotated corpus, we believe that our data set will be a valuable resource to the NLP community. 2 Corpus Information We use as our corpus the 4.5 million word International Corpus of Learner English (ICLE) (Granger 543 Topic Languages Essays Most university degrees are theoretical and do not prepare students for the real world. They are therefore of very little value. 13 131 The prison system is outdated. No civilized society should punish its criminals: it should rehabilitate them. 11 80 In his novel Animal Farm, George Orwell wrote “All men are equal but some are more equal than others.” How true is this today? 10 64 Table 1: Some examples of writing topics. et al., 2009), which consists of more than 6000 essays on a variety of different topics written by university undergraduates from 16 countries and 16 native languages who are learners of English as a Foreign Language. 91% of the ICLE texts are written in response to prompts that trigger argumentative essays. We select 10 such prompts, and from the subset of argumentative essays written in response to them, we select 1000 essays to annotate for training and testing of our essay argument strength scoring system. Table 1 shows three of the 10 topics selected for annotation. Fifteen native languages are represented in the set of annotated essays. 3 Corpus Annotation We ask human annotators to score each of the 1000 argumentative essays along the argument strength dimension. Our annotators were selected from over 30 applicants who were familiarized with the scoring rubric and given sample essays to score. The six who were most consistent with the expected scores were given additional essays to annotate. Annotators evaluated the argument strength of each essay using a numerical score from one to four at half-point increments (see Table 2 for a description of each score).1 This contrasts with previous work on essay scoring, where the corpus is annotated with a binary decision (i.e., good or bad) for a given scoring dimension (e.g., Higgins et al. (2004)). Hence, our annotation scheme not only provides a finer-grained distinction of argument strength (which can be important in practice), but also makes the prediction task more challenging. 1See our website at http://www.hlt.utdallas. edu/˜persingq/ICLE/ for the complete list of argument strength annotations. Score Description of Argument Strength 4 essay makes a strong argument for its thesis and would convince most readers 3 essay makes a decent argument for its thesis and could convince some readers 2 essay makes a weak argument for its thesis or sometimes even argues against it 1 essay does not make an argument or it is often unclear what the argument is Table 2: Descriptions of the meaning of scores. To ensure consistency in annotation, we randomly select 846 essays to have graded by multiple annotators. Though annotators exactly agree on the argument strength score of an essay only 26% of the time, the scores they apply fall within 0.5 points in 67% of essays and within 1.0 point in 89% of essays. 
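These agreement figures are simple tolerance checks over the doubly annotated essays; a minimal sketch (the score pairs below are invented for illustration) is:

```python
# Fraction of doubly annotated essays whose two scores agree exactly,
# within 0.5 points, and within 1.0 point.
def agreement_rates(score_pairs, tolerances=(0.0, 0.5, 1.0)):
    n = len(score_pairs)
    return {tol: sum(abs(a - b) <= tol for a, b in score_pairs) / n
            for tol in tolerances}

toy_pairs = [(2.5, 3.0), (3.0, 3.0), (2.0, 3.0), (3.5, 2.5)]
print(agreement_rates(toy_pairs))   # {0.0: 0.25, 0.5: 0.5, 1.0: 1.0}
```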
For the sake of our experiments, whenever the two annotators disagree on an essay’s argument strength score, we assign the essay the average the two scores rounded down to the nearest half point. Table 3 shows the number of essays that receive each of the seven scores for argument strength. score 1.0 1.5 2.0 2.5 3.0 3.5 4.0 essays 2 21 116 342 372 132 15 Table 3: Distribution of argument strength scores. 4 Score Prediction We cast the task of predicting an essay’s argument strength score as a regression problem. Using regression captures the fact that some pairs of scores are more similar than others (e.g., an essay with an argument strength score of 2.5 is more similar to an essay with a score of 3.0 than it is to one with a score of 1.0). A classification system, by contrast, may sometimes believe that the scores 1.0 and 4.0 are most likely for a particular essay, even though these scores are at opposite ends of the score range. In the rest of this section, we describe how we train and apply our regressor. Training the regressor. Each essay in the training set is represented as an instance whose label is the essay’s gold score (one of the values shown in Table 3), with a set of baseline features (Section 5) and up to seven other feature types we propose (Section 6). After creating training instances, we train a linear regressor with regularization parameter c for scoring test essays using the linear SVM regressor implemented in the LIBSVM software package (Chang and Lin, 2001). All SVMspecific learning parameters are set to their default 544 values except c, which we tune to maximize performance on held-out validation data.2 Applying the regressor. After training the regressor, we use it to score the test set essays. Test instances are created in the same way as the training instances. The regressor may assign an essay any score in the range of 1.0−4.0. 5 Baseline Systems In this section, we describe two baseline systems for predicting essays’ argument strength scores. 5.1 Baseline 1: Most Frequent Baseline Since there is no existing system specifically for scoring argument strength, we begin by designing a simple baseline. When examining the score distribution shown in Table 3, we notice that, while there exist at least a few essays having each of the seven possible scores, the essays are most densely clustered around scores 2.5 and 3.0. A system that always predicts one of these two scores will very frequently be right. For this reason, we develop a most frequent baseline. Given a training set, Baseline 1 counts the number of essays assigned to each of the seven scores. From these counts, it determines which score is most frequent and assigns this most frequent score to each test essay. 5.2 Baseline 2: Learning-based Ong et al. Our second baseline is a learning-based version of Ong et al.’s (2014) system. Recall from the introduction that Ong et al. presented a rule-based approach to predict the holistic score of an argumentative essay. Their approach was composed of two steps. First, they constructed eight heuristic rules for automatically labeling each of the sentences in their corpus with exactly one of the following argument labels: OPPOSES, SUPPORTS, CITATION, CLAIM, HYPOTHESIS, CURRENT STUDY, or NONE. After that, they employed these sentence labels to construct five heuristic rules to holistically score a student essay. We create Baseline 2 as follows, employing the methods described in Section 4 for training, parameter tuning, and testing. 
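The Section 4 pipeline referenced here amounts to fitting a linear support vector regressor with a tuned regularization constant and clipping its predictions. A minimal sketch, using scikit-learn's SVR as a stand-in for the LIBSVM package and assuming the feature matrices have been built elsewhere, is:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics import mean_absolute_error

def train_regressor(X_train, y_train, X_val, y_val):
    """Tune c over {10^0, ..., 10^6} on held-out validation data.
    MAE is an assumed validation criterion; the text only specifies tuning
    to maximize held-out performance."""
    best_model, best_err = None, float("inf")
    for c in [10.0 ** k for k in range(7)]:
        model = SVR(kernel="linear", C=c).fit(X_train, y_train)
        err = mean_absolute_error(y_val, model.predict(X_val))
        if err < best_err:
            best_model, best_err = model, err
    return best_model

def score_essays(model, X_test):
    return np.clip(model.predict(X_test), 1.0, 4.0)   # scores stay in the 1.0-4.0 range
```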
We employ Ong et al.’s method to tag each sentence of our essays with an argument label, but modify their method to accommodate differences between their and our corpus. In particular, our more informal corpus 2For parameter tuning, we employ the following c values: 100 101, 102, 103, 104, 105, or 106. # Rule 1 Sentences that begin with a comparison discourse connective or contain any string prefixes from “conflict” or “oppose” are tagged OPPOSES. 2 Sentences that begin with a contingency connective are tagged SUPPORTS. 3 Sentences containing any string prefixes from “suggest”, “evidence”, “shows”, “Essentially”, or “indicate” are tagged CLAIM. 4 Sentences in the first, second, or last paragraph that contain string prefixes from “hypothes”, or “predict”, but do not contain string prefixes from “conflict” or “oppose” are tagged HYPOTHESIS. 5 Sentences containing the word “should” that contain no contingency connectives or string prefixes from “conflict” or “oppose” are also tagged HYPOTHESIS. 6 If the previous sentence was tagged hypothesis and this sentence begins with an expansion connective, it is also tagged HYPOTHESIS. 7 Do not apply a label to this sentence. Table 4: Sentence labeling rules. does not contain CURRENT STUDY or CITATION sentences, so we removed portions of rules that attempt to identify these labels (e.g. portions of rules that search for a four-digit number, as would appear as the year in a citation). Our resulting rule set is shown in Table 4. If more than one of these rules applies to a sentence, we tag it with the label from the earliest rule that applies. After labeling all the sentences in our corpus, we then convert three of their five heuristic scoring rules into features for training a regressor.3 The resulting three features describe (1) whether an essay contains at least one sentence labeled HYPOTHESIS, (2) whether it contains at least one sentence labeled OPPOSES, and (3) the sum of CLAIM sentences and SUPPORTS sentences divided by the number of paragraphs in the essay. If the value of the last feature exceeds 1, we instead assign it a value of 1. These features make sense because, for example, we would expect essays containing lots of SUPPORTS sentences to offer stronger arguments. 6 Our Approach Our approach augments the feature set available to Baseline 2 with seven types of novel features. 1. POS N-grams (POS) Word n-grams, though commonly used as features for training text classifiers, are typically not used in automated essay 3We do not apply the remaining two of their heuristic scoring rules because they deal solely with current studies and citations. 545 grading. The reason is that any list of word n-gram features automatically compiled from a given set of training essays would be contaminated with prompt-specific n-grams that may make the resulting regressor generalize less well to essays written for new prompts. To generalize our feature set in a way that does not risk introducing prompt-dependent features, we introduce POS n-gram features. Specifically, we construct one feature from each sequence of 1−5 part-of-speech tags appearing in our corpus. In order to obtain one of these features’ values for a particular essay, we automatically label each essay with POS tags using the Stanford CoreNLP system (Manning et al., 2014), then count the number of times the POS tag sequence occurs in the essay. An example of a useful feature of this type is “CC NN ,”, as it is able to capture when a student writes either “for instance,” or “for example,”. 
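Concretely, the extraction can be sketched as follows; NLTK's tagger stands in for Stanford CoreNLP, and the L2 (unit-length) normalization at the end is an assumption about which norm is meant.

```python
# Count POS tag sequences of length 1-5 and normalize the feature vector.
from collections import Counter
import math
import nltk   # requires the punkt and averaged_perceptron_tagger models

def pos_ngram_features(essay_text, max_n=5):
    tags = [tag for _, tag in nltk.pos_tag(nltk.word_tokenize(essay_text))]
    counts = Counter(" ".join(tags[i:i + n])
                     for n in range(1, max_n + 1)
                     for i in range(len(tags) - n + 1))
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {ngram: count / norm for ngram, count in counts.items()}
```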
We normalize each essay’s set of POS n-gram features to unit length. 2. Semantic Frames (SFR) While POS n-grams provide syntactic generalizations of word n-grams, FrameNet-style semantic role labels provide semantic generalizations. For each essay in our data set, we employ SEMAFOR (Das et al., 2010) to identify each semantic frame occurring in the essay as well as each frame element that participates in it. For example, a semantic frame may describe an event that occurs in a sentence, and the event’s frame elements may be the people or objects that participate in the event. For a more concrete example, consider the sentence “I said that I do not believe that it is a good idea”. This sentence contains a Statement frame because a statement is made in it. One of the frame elements participating in the frame is the Speaker “I”. From this frame, we would extract a feature pairing the frame together with its frame element to get the feature “Statement-Speaker-I”. We would expect this feature to be useful for argument strength scoring because we noticed that essays that focus excessively on the writer’s personal opinions and experiences tended to receive lower argument strength scores. As with POS n-grams, we normalize each essay’s set of Semantic Frame features to unit length. 3. Transitional Phrases (TRP) We hypothesize that a more cohesive essay, being easier for a reader to follow, is more persuasive, and thus makes a stronger argument. For this reason, it would be worthwhile to introduce features that measure how cohesive an essay is. Consequently, we create features based on the 149 transitional phrases compiled by Study Guides and Strategies4. Study Guides and Strategies collected these transitions into lists of phrases that are useful for different tasks (e.g. a list of transitional phrases for restating points such as “in essence” or “in short”). There are 14 such lists, which we use to generalize transitional features. Particularly, we construct a feature for each of the 14 phrase type lists. For each essay, we assign the feature a value indicating the average number of transitions from the list that occur in the essay per sentence. Despite being phrase-based, transitional phrases features are designed to capture only prompt-independent information, which as previously mentioned is important in essay grading. 4. Coreference (COR) As mentioned in our discussion of transitional phrases, a strong argument must be cohesive so that the reader can understand what is being argued. While the transitional phrases already capture one aspect of this, they cannot capture when transitions are made via repeated mentions of the same entities in different sentences. We therefore introduce a set of 19 coreference features that capture information such as the fraction of an essay’s sentences that mention entities introduced in the prompt, and the average number of total mentions per sentence.5 Calculating these feature values, of course, requires that the text be annotated with coreference information. We automatically coreference-annotate the essays using the Stanford CoreNLP system. 5. Prompt Agreement (PRA) An essay’s prompt is always either a single statement, or can be split up into multiple statements with which a writer may AGREE STRONGLY, AGREE SOMEWHAT, be NEUTRAL, DISAGREE SOMEWHAT, DISAGREE STRONGLY, NOT ADDRESS, or explicitly have NO OPINION on. 
We believe information regarding which of these categories a writer’s opinion falls into has some bearing on the strength of her argument because, for example, a writer who explicitly mentions having no opinion has probably not made a persuasive argument. For this reason, we annotate a subset of 830 of our ICLE essays with these agreement labels. We then train a multiclass maximum entropy classifier 4http://www.studygs.net/wrtstr6.htm 5See our website at http://www.hlt.utdallas. edu/˜persingq/ICLE/ for a complete list of coreference features. 546 using MALLET (McCallum, 2002) for identifying which one of these seven categories an author’s opinion falls into. The feature set we use for this task includes POS n-gram and semantic frame features as described earlier in this section, lemmatized word 1-3 grams, the keyword and prompt adherence keyword features we described in Persing and Ng (2013) and Persing and Ng (2014), respectively, and a feature indicating which statement in the prompt we are attempting to classify the author’s agreement level with respect to. Our classifier’s training set in this case is the subset of prompt agreement annotated essays that fall within the training set of our 1000 essay argument strength annotated data. We then apply the trained classifier to our entire 1000 essay set in order to obtain predictions from which we can then construct features for argument strength scoring. For each prediction, we construct a feature indicating which of the seven classes the classifier believes is most likely, as well as seven additional features indicating the probability the classifier associates with each of the seven classes. We produce additional related annotations on this 830 essay set in cases when the annotated opinion was neither AGREE STRONGLY nor DISAGREE STRONGLY, as the reason the annotator chose one of the remaining five classes may sometimes offer insight into the writer’s argument. The classes of reasons we annotate include cases when the writer: (1) offered CONFLICTING OPINIONS, (2) EXPLICITLY STATED an agreement level, (3) gave only a PARTIAL RESPONSE to the prompt, (4) argued a SUBTLER POINT not capturable by extreme opinions, (5) did not make it clear that the WRITER’S POSITION matched the one she argued, (6) only BRIEFLY DISCUSSED the topic, (7) CONFUSINGLY PHRASED her argument, or (8) wrote something whose RELEVANCE to the topic was not clear. We believe that knowing which reason(s) apply to an argument may be useful for argument strength scoring because, for example, the CONFLICTING OPINIONS class indicates that the author wrote a confused argument, which probably deserves a lower argument strength score. We train eight binary maximum entropy classifiers, one for each of these reasons, using the same training data and feature set we use for agreement level prediction. We then use the trained classifiers to make predictions for these eight reasons on all 1000 essays. Finally, we generate features for our argument strength regressor from these predictions by constructing two features from each of the eight reasons. The first binary feature is turned on whenever the maximum entropy classifier believes that the reason applies (i.e., when it assigns the reason a probability of over 0.5). The second feature’s value is the probability the classifier assigns for this reason. 6. Argument Component Predictions (ACP) Many of our features thus far do not result from an attempt to build a deep understanding of the structure of the arguments within our essays. 
To introduce such an understanding into our system, we follow Stab and Gurevych (2014a), who collected and annotated a corpus of 90 persuasive essays (not from the ICLE corpus) with the understanding that the arguments contained therein consist of three types of argument components. In one essay, these argument components typically include a MAJOR CLAIM, several lesser CLAIMs which usually support or attack the major claim, and PREMISEs which usually underpin the validity of a claim or major claim. Stab and Gurevych (2014b) trained a system to identify these three types of argument components within their corpus given the components’ boundaries. Since our corpus does not contain annotated argument components, we modify their approach in order to simultaneously identify argument components and their boundaries. We begin by implementing a maximum entropy version of their system using MALLET for performing the argument component identification task. We feed our system the same structural and lexical features they described. We then augment the system in the following ways. First, since our corpus is not annotated with argument component boundaries, we construct a set of low precision, high recall heuristics for identifying the locations in each sentence where an argument component’s boundaries might occur. The majority of these rules depend primarily on a syntactic parse tree we automatically generated for the sentence using the Stanford CoreNLP system. Since a large majority of annotated argument components are substrings of a simple declarative clause (an S node in the parse tree), we begin by identifying each S node in the sentence’s tree. Given one of these clauses, we collect a list of left and right boundaries where an argument component may begin or end. The rules we used to 547 (a) Potential left boundary locations # Rule 1 Exactly where the S node begins. 2 After an initial explicit connective, or if the connective is immediately followed by a comma, after the comma. 3 After nth comma that is an immediate child of the S node. 4 After nth comma. (b) Potential right boundary locations # Rule 5 Exactly where the S node ends, or if S ends in a punctuation, immediately before the punctuation. 6 If the S node ends in a (possibly nested) SBAR node, immediately before the nth shallowest SBAR.6 7 If the S node ends in a (possibly nested) PP node, immediately before the nth shallowest PP. Table 5: Rules for extracting candidate argument component boundary locations. find these boundaries are summarized in Table 5. Given an S node, we use our rules to construct up to l × r argument component candidate instances to feed into our system by combining each left boundary with each right boundary that occurs after it, where l is the number of potential left boundaries our rules found, and r is the number of right boundaries they found. The second way we augment the system is by adding a boundary rule feature type. Whenever we generate an argument component candidate instance, we augment its normal feature set with two binary features indicating which heuristic rule was used to find the candidate’s left boundary, and which rule was used to find its right boundary. If two rules can be used to find the same left or right boundary position, the first rule listed in the table is the one used to create the boundary rule feature. This is why, for example, the table contains multiple rules that can find boundaries at comma locations. 
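Once the Table 5 rules have proposed boundary positions, candidate generation is just the cross product of left and right boundaries; a stripped-down sketch (the boundary lists and rule ids are assumed to come from those rules) is shown below.

```python
# Pair every potential left boundary with every later right boundary inside a clause;
# the rule ids that produced the boundaries become the candidate's boundary-rule features.
def candidate_components(tokens, left_boundaries, right_boundaries):
    candidates = []
    for l_idx, l_rule in left_boundaries:
        for r_idx, r_rule in right_boundaries:
            if r_idx > l_idx:
                candidates.append({
                    "span": " ".join(tokens[l_idx:r_idx]),
                    "left_rule": l_rule,    # e.g. rule 1-4 of Table 5(a)
                    "right_rule": r_rule,   # e.g. rule 5-7 of Table 5(b)
                })
    return candidates
```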
We would expect some types of commas (e.g., ones following an explicit connective) to be more significant than others. A last point that requires additional explanation is that several of the rules contain the word “nth”. This means that, for example, if a sentence contains multiple commas, we will generate multiple left boundary positions for it using rule 4, and the left boundary rule feature associated with each position will be different (e.g., there is a unique fea6The S node may end in an SBAR node which itself has an SBAR node as its last child, and so on. In this case, the S node could be said to end with any of these “nested” SBARS, so we use the position before each (nth) one as a right boundary. ture for the first comma, and for the the second comma, etc.). The last augmentation we make to the system is that we apply a NONE label to all argument component candidates whose boundaries do not exactly match those of a gold standard argument component. While Stab and Gurevych also did this, their list of such argument component candidates consisted solely of sentences containing no argument components at all. We could not do this, however, since our corpus is not annotated with argument components and we therefore do not know which sentences these would be. We train our system on all the instances we generated from the 90 essay corpus and apply it to label all the instances we generated in the same way from our 1000 essay ICLE corpus. As a result, we end up with a set of automatically generated argument component annotations on our 1000 essay corpus. We use these annotations to generate five additional features for our argument strength scoring SVM regressor. These features’ values are the number of major claims in the essay, the number of claims in the essay, the number of premises in the essay, the fraction of paragraphs that contain either a claim or a major claim, and the fraction of paragraphs that contain at least one argument component of any kind. 7. Argument Errors (ARE) We manually identified three common problems essays might have that tend to result in weaker arguments, and thus lower argument strength scores. We heuristically construct three features, one for each of these problems, to indicate to the learner when we believe an essay has one of these problems. It is difficult to make a reasonably strong argument in an essay that is too short. For this reason, we construct a feature that encodes whether the essay has 15 or fewer sentences, as only about 7% of our essays are this short. In the Stab and Gurevych corpus, only about 5% of paragraphs have no claims or major claims in them. We believe that an essay that contains too many of these claim or major claim-less paragraphs may have an argument that is badly structured, as it is typical for a paragraph to contain one or two (major) claim(s). For this reason, we construct a feature that encodes whether more than half of the essay’s paragraphs contain no claims or major claims, as indicated by the previously generated automatic annotations. 548 Similarly, only 5% of the Stab and Gurevych essays contain no argument components at all. We believe that an essay that contains too many of these component-less paragraphs is likely to have taken too much space discussing issues that are not relevant to the main argument of the essay. For this reason, we construct a feature that encodes whether more than one of the essay’s paragraphs contain no components, as indicated by the previously generated automatic annotations. 
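Putting the three indicators together, a minimal sketch (the `essay` object and its fields are hypothetical stand-ins for the representations used in the system) is:

```python
# The three argument-error (ARE) indicator features described above.
def argument_error_features(essay):
    too_short = int(essay.num_sentences <= 15)

    claimless = sum(1 for p in essay.paragraphs
                    if not any(c.label in ("Claim", "MajorClaim") for c in p.components))
    many_claimless_paras = int(claimless > len(essay.paragraphs) / 2)

    componentless = sum(1 for p in essay.paragraphs if not p.components)
    many_componentless_paras = int(componentless > 1)

    return [too_short, many_claimless_paras, many_componentless_paras]
```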
7 Evaluation In this section, we evaluate our system for argument strength scoring. All the results we report are obtained via five-fold cross-validation experiments. In each experiment, we use 60% of our labeled essays for model training, another 20% for parameter tuning and feature selection, and the final 20% for testing. These correspond to the training set, held-out validation data, and test set mentioned in Section 4. 7.1 Scoring Metrics We employ four evaluation metrics. As we will see below, S1, S2, and S3 are error metrics, so lower scores on them imply better performance. In contrast, PC is a correlation metric, so higher correlation implies better performance. The simplest metric, S1, measures the frequency at which a system predicts the wrong score out of the seven possible scores. Hence, a system that predicts the right score only 25% of the time would receive an S1 score of 0.75. The S2 metric measures the average distance between a system’s predicted score and the actual score. This metric reflects the idea that a system that predicts scores close to the annotator-assigned scores should be preferred over a system whose predictions are further off, even if both systems estimate the correct score at the same frequency. The S3 metric measures the average square of the distance between a system’s score predictions and the annotator-assigned scores. The intuition behind this metric is that not only should we prefer a system whose predictions are close to the annotator scores, but we should also prefer one whose predictions are not too frequently very far away from the annotated scores. The three error metric scores are given by: 1 N X Aj̸=E′ j 1, 1 N N X j=1 |Aj −Ej|, 1 N N X j=1 (Aj −Ej)2 System S1 S2 S3 PC Baseline 1 .668 .428 .321 .000 Baseline 2 .652 .418 .267 .061 Our System .618 .392 .244 .212 Table 6: Five-fold cross-validation results for argument strength scoring. where Aj, Ej, and E′ j are the annotator assigned, system predicted, and rounded system predicted scores7 respectively for essay j, and N is the number of essays. The last metric, PC, computes Pearson’s correlation coefficient between a system’s predicted scores and the annotator-assigned scores. PC ranges from −1 to 1. A positive (negative) PC implies that the two sets of predictions are positively (negatively) correlated. 7.2 Results and Discussion Five-fold cross-validation results on argument strength score prediction are shown in Table 6. The first two rows show our baseline systems’ performances. The best baseline system (Baseline 2), which recall is a learning-based version of Ong et al.’s (2014) system, predicts the wrong score 65.2% of the time. Its predictions are off by an average of .418 points, the average squared error of its predictions is .267, and its average Pearson correlation coefficient with the gold argument strength score across the five folds is .061. Results of our system are shown on the third row of Table 6. Rather than using all of the available features (i.e., Baseline 2’s features and the novel features described in Section 6), our system uses only the feature subset selected by the backward elimination feature selection algorithm (Blum and Langley, 1997) that achieves the best performance on the validation data (see Section 7.3 for details). 
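Written out, the four metrics from Section 7.1 can be computed as in the following sketch (NumPy/SciPy; clipping to the 1.0-4.0 range and half-point rounding follow footnote 7):

```python
import numpy as np
from scipy.stats import pearsonr

def evaluate(gold, predicted):
    A = np.asarray(gold, dtype=float)
    E = np.clip(np.asarray(predicted, dtype=float), 1.0, 4.0)
    E_rounded = np.round(E * 2) / 2               # nearest half point, used for S1 only
    s1 = float(np.mean(A != E_rounded))           # frequency of predicting the wrong score
    s2 = float(np.mean(np.abs(A - E)))            # mean absolute distance
    s3 = float(np.mean((A - E) ** 2))             # mean squared distance
    pc = float(pearsonr(A, E)[0])                 # Pearson correlation coefficient
    return s1, s2, s3, pc
```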
As we can see, our system predicts the wrong score only 61.8% of the time, predicts scores that are off by an average of .392 points, the average squared error of its predictions is .244, and its average Pearson correlation coefficient with the gold scores is .212. These numbers correspond to relative error reductions8 of 5.2%, 7We round all predictions to 1.0 or 4.0 if they fall outside the 1.0−4.0 range and round S1 predictions to the nearest half point. 8These numbers are calculated B−O B−P where B is the baseline system’s score, O is our system’s score, and P is a perfect score. Perfect scores for error measures and PC are 0 and 1 respectively. 549 6.2%, 8.6%, and 16.1% over Baseline 2 for S1, S2, S3, and PC, respectively, the last three of which are significant improvements9. The magnitudes of these improvements suggest that, while our system yields improvements over the best baseline by all four measures, its greatest contribution is that its predicted scores are best-correlated with the gold standard argument strength scores. 7.3 Feature Ablation To gain insight into how much impact each of the feature types has on our system, we perform feature ablation experiments in which we remove the feature types from our system one-by-one. We show the results of the ablation experiments on the held-out validation data as measured by the four scoring metrics in Table 7. The top line of each subtable shows what a system that uses all available features’s score would be if we removed just one of the feature types. So to see how our system performs by the PC metric if we remove only prompt agreement (PRA) features, we would look at the first row of results of Table 7(d) under the column headed by PRA. The number here tells us that the resulting system’s PC score is .303. Since our system that uses all feature types obtains S1, S2, S3, and PC scores of .521, .366, .218, and .341 on the validation data respectively, the removal of PRA features costs the complete system .038 PC points, and thus we can infer that the inclusion of PRA features has a beneficial effect. From row 1 of Table 7(a), we can see that removing the Baseline 2 feature set (BAS) yields a system with the best S1 score in the presence of the remaining feature types in this row. For this reason, we permanently remove the BAS features from the system before we generate the results on line 2. We iteratively remove the feature type that yields a system with the best performance in this way until we get to the last line, where only one feature type is used to generate each result. Since the feature type whose removal yields the best system is always the rightmost entry in a line, the order of column headings indicates the relative importance of the feature types, with the leftmost feature types being most important to performance and the rightmost feature types being least important in the presence of the other feature types. The score corresponding to the best system is boldfaced for emphasis, indicating that all fea9All significance tests are paired t-tests with p < 0.05. 
(a) Results using the S1 metric SFR ACP TRP PRA POS COR ARE BAS .534 .594 .530 .524 .522 .532 .529 .521 .530 .554 .526 .529 .526 .528 .525 .534 .555 .525 .531 .528 .522 .543 .558 .536 .530 .527 .565 .561 .536 .529 .563 .547 .539 .592 .550 (b) Results using the S2 metric POS PRA ACP TRP BAS SFR COR ARE .370 .369 .375 .367 .367 .366 .366 .365 .369 .369 .375 .366 .366 .365 .365 .370 .371 .372 .367 .366 .365 .374 .374 .376 .368 .366 .377 .375 .374 .368 .381 .377 .376 .385 .382 (c) Results using the S3 metric POS PRA ACP TRP BAS COR ARE SFR .221 .220 .225 .219 .218 .217 .217 .211 .220 .219 .221 .214 .212 .211 .211 .218 .218 .220 .212 .211 .209 .221 .216 .218 .212 .210 .224 .217 .218 .212 .228 .220 .219 .229 .225 (d) Results using the PC metric POS ACP PRA TRP BAS ARE COR SFR .302 .270 .303 .326 .324 .347 .347 .356 .316 .300 .327 .344 .361 .366 .371 .346 .331 .341 .356 .367 .378 .325 .331 .345 .362 .375 .297 .331 .339 .360 .280 .320 .321 .281 .281 Table 7: Feature ablation results. In each subtable, the first row shows how our system would perform on the validation set essays if each feature type was removed. We then remove the least important feature type, and show in the next row how the adjusted system would perform without each remaining type. ture types appearing to its left are included in the best system.10 It is interesting to note that while the relative importance of different feature types does not remain exactly the same if we measure performance in different ways, we can see that some feature types tend to be more important than others in a majority of the four scoring metrics. From these tables, it is clear that POS n-grams 10The reason the performances shown in these tables appear so much better than those shown previously is that in these tables we tune parameters and display results on the validation set in order to make it clearer why we chose to remove each feature type. In Table 6, by contrast, we tune parameters on the validation set, but display results using those parameters on the test set. 550 S1 S2 S3 PC Gold .25 .50 .75 .25 .50 .75 .25 .50 .75 .25 .50 .75 1.0 2.90 2.90 2.90 2.74 2.74 2.74 2.74 2.74 2.74 2.74 2.74 2.74 1.5 2.69 2.78 2.89 2.36 2.67 2.78 2.52 2.63 2.71 2.52 2.63 2.81 2.0 2.61 2.72 2.85 2.54 2.69 2.79 2.60 2.69 2.78 2.60 2.70 2.80 2.5 2.64 2.71 2.85 2.65 2.75 2.86 2.66 2.75 2.85 2.69 2.79 2.89 3.0 2.73 2.84 2.92 2.71 2.81 2.91 2.70 2.80 2.90 2.72 2.83 2.90 3.5 2.74 2.85 2.97 2.78 2.89 3.02 2.79 2.90 3.00 2.81 2.90 2.98 4.0 2.75 2.87 3.10 2.76 2.85 3.09 2.76 2.83 3.08 2.81 2.86 3.19 Table 8: Distribution of regressor scores for our system. (POS), prompt agreement features (PRA), and argument component predictions (ACP) are the most generally important feature types in roughly that order. They all appear in the leftmost three positions under the tables for metrics S2, S3, and PC, the three metrics by which our system significantly outperforms Baseline 2. Furthermore, removing any of them tends to have a larger negative impact on our system than removing any of the other feature types. Transitional phrase features (TRP) and Baseline 2 features (BAS), by contrast, are of more middling importance. While both appear in the best feature sets for the aforementioned metrics (i.e., they appear to the left of the boldfaced entry in the corresponding ablation tables), the impact of their removal is relatively less than that of POS, PRA, or ACP features. 
Finally, while the remaining three feature types might at first glance seem unimportant to argument strength scoring, it is useful to note that they all appear in the best performing feature set as measured by at least one of the four scoring metrics. Indeed, semantic frame features (SFR) appear to be the most important feature type as measured by the S1 metric, despite being one of the least useful feature types as measured by the other performance metrics. From this we learn that when designing an argument strength scoring system, it is important to understand what the ultimate goal is, as the choice of performance metric can have a large impact on what type of system will seem ideal. 7.4 Analysis of Predicted Scores To more closely examine the behavior of our system, in Table 8 we chart the distributions of scores it predicts for essays having each gold standard score. As an example of how to read this table, consider the number 2.60 appearing in row 2.0 in the .25 column of the S3 region. This means that 25% of the time, when our system with parameters tuned for optimizing S3 (including the S3 feature set as selected in Table 7(c)) is presented with a test essay having a gold standard score of 2.0, it predicts that the essay has a score less than or equal to 2.60. From this table, we see that our system has a bias toward predicting more frequent scores as the smallest entry in the table is 2.36 and the largest entry is 3.19, and as we saw in Table 3, 71.4% of essays have gold scores in this range. Nevertheless, our system does not rely entirely on bias, as evidenced by the fact that each column in the table has a tendency for its scores to ascend as the gold standard score increases, implying that our system has some success at predicting lower scores for essays with lower gold standard argument strength scores and higher scores for essays with higher gold standard argument strength scores. The major exception to this rule is line 1.0, but this is to be expected since there are only two essays having this gold score, so the sample from which the numbers on this line are calculated is very small. 8 Conclusion We proposed a feature-rich approach to the new problem of predicting argument strength scores on student essays. In an evaluation on 1000 argumentative essays selected from the ICLE corpus, our system significantly outperformed a baseline system that relies solely on features built from heuristically labeled sentence argument function labels by up to 16.1%. To stimulate further research on this task, we make all of our annotations publicly available. Acknowledgments We thank the three anonymous reviewers for their detailed comments. This work was supported in part by NSF Grants IIS-1147644 and IIS-1219142. Any opinions, findings, conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views or official policies, either expressed or implied, of NSF. 551 References Yigal Attali and Jill Burstein. 2006. Automated essay scoring with E-rater v.2.0. Journal of Technology, Learning, and Assessment, 4(3). Avrim Blum and Pat Langley. 1997. Selection of relevant features and examples in machine learning. Artificial Intelligence, 97(1–2):245–271. Jill Burstein, Daniel Marcu, and Kevin Knight. 2003. Finding the WRITE stuff: Automatic identification of discourse structure in student essays. IEEE Intelligent Systems, 18(1):32–39. Chih-Chung Chang and Chih-Jen Lin, 2001. LIBSVM: A library for support vector machines. 
Software available at http://www.csie.ntu. edu.tw/˜cjlin/libsvm. Dipanjan Das, Nathan Schneider, Desai Chen, and Noah A. Smith. 2010. Probabilistic frame-semantic parsing. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 948–956. Mohammad Hassan Falakmasir, Kevin D. Ashley, Christian D. Schunn, and Diane J. Litman. 2014. Identifying thesis and conclusion statements in student essays to scaffold peer review. In Intelligent Tutoring Systems, pages 254–259. Springer International Publishing. Sylviane Granger, Estelle Dagneaux, Fanny Meunier, and Magali Paquot. 2009. International Corpus of Learner English (Version 2). Presses universitaires de Louvain. Derrick Higgins, Jill Burstein, Daniel Marcu, and Claudia Gentile. 2004. Evaluating multiple aspects of coherence in student essays. In Human Language Technologies: The 2004 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 185–192. Thomas K. Landauer, Darrell Laham, and Peter W. Foltz. 2003. Automated scoring and annotation of essays with the Intelligent Essay AssessorTM. In Automated Essay Scoring: A Cross-Disciplinary Perspective, pages 87–112. Lawrence Erlbaum Associates, Inc., Mahwah, NJ. Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations, pages 55–60. Andrew Kachites McCallum. 2002. MALLET: A Machine Learning for Language Toolkit. http: //mallet.cs.umass.edu. Eleni Miltsakaki and Karen Kukich. 2004. Evaluation of text coherence for electronic essay scoring systems. Natural Language Engineering, 10(1):25–55. Nathan Ong, Diane Litman, and Alexandra Brusilovsky. 2014. Ontology-based argument mining and automatic essay scoring. In Proceedings of the First Workshop on Argumentation Mining, pages 24–28. Isaac Persing and Vincent Ng. 2013. Modeling thesis clarity in student essays. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 260–269. Isaac Persing and Vincent Ng. 2014. Modeling prompt adherence in student essays. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1534–1543. Isaac Persing, Alan Davis, and Vincent Ng. 2010. Modeling organization in student essays. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 229– 239. Mark D. Shermis and Jill C. Burstein. 2003. Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum Associates, Inc., Mahwah, NJ. Mark D. Shermis, Jill Burstein, Derrick Higgins, and Klaus Zechner. 2010. Automated essay scoring: Writing assessment and instruction. In International Encyclopedia of Education (3rd edition). Elsevier, Oxford, UK. Yi Song, Michael Heilman, Beata Beigman Klebanov, and Paul Deane. 2014. Applying argumentation schemes for essay scoring. In Proceedings of the First Workshop on Argumentation Mining, pages 69–78. Christian Stab and Iryna Gurevych. 2014a. Annotating argument components and relations in persuasive essays. In Proceedings of the 25th International Conference on Computational Linguistics: Technical Papers, pages 1501–1510. Christian Stab and Iryna Gurevych. 2014b. 
Identifying argumentative discourse structures in persuasive essays. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 46–56. 552
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 553–563, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Summarization of Multi-Document Topic Hierarchies using Submodular Mixtures Ramakrishna B Bairi IITB-Monash Research Academy IIT Bombay Mumbai, 40076, India [email protected] Rishabh Iyer University of Washington Seattle, WA-98175, USA [email protected] Ganesh Ramakrishnan IIT Bombay Mumbai, 40076, India [email protected] Jeff Bilmes University of Washington Seattle, WA-98175, USA [email protected] Abstract We study the problem of summarizing DAG-structured topic hierarchies over a given set of documents. Example applications include automatically generating Wikipedia disambiguation pages for a set of articles, and generating candidate multi-labels for preparing machine learning datasets (e.g., for text classification, functional genomics, and image classification). Unlike previous work, which focuses on clustering the set of documents using the topic hierarchy as features, we directly pose the problem as a submodular optimization problem on a topic hierarchy using the documents as features. Desirable properties of the chosen topics include document coverage, specificity, topic diversity, and topic homogeneity, each of which, we show, is naturally modeled by a submodular function. Other information, provided say by unsupervised approaches such as LDA and its variants, can also be utilized by defining a submodular function that expresses coherence between the chosen topics and this information. We use a large-margin framework to learn convex mixtures over the set of submodular components. We empirically evaluate our method on the problem of automatically generating Wikipedia disambiguation pages using human generated clusterings as ground truth. We find that our framework improves upon several baselines according to a variety of standard evaluation metrics including the Jaccard Index, F1 score and NMI, and moreover, can be scaled to extremely large scale problems. 1 Introduction Several real world machine learning applications involve hierarchy based categorization of topics for a set of objects. Objects could be, e.g., a set of documents for text classification, a set of genes in functional genomics, or a set of images in computer vision. One can often define a natural topic hierarchy to categorize these objects. For example, in text and image classification problems, each document or image is assigned a hierarchy of labels — a baseball page is assigned the labels “baseball” and “sports.” Moreover, many of these applications, naturally have an existing topic hierarchy generated on the entire set of objects (Rousu et al., 2006; Barutcuoglu et al., 2006; ling Zhang and hua Zhou, 2007; Silla and Freitas, 2011; Tsoumakas et al., 2010). Given a DAG-structured topic hierarchy and a subset of objects, we investigate the problem of finding a subset of DAG-structured topics that are induced by that subset (of objects). This problem arises naturally in several real world applications. For example, consider the problem of identifying appropriate label sets for a collection of articles. Several existing text collection datasets such as 20 Newsgroup1, Reuters-215782 work with a predefined set of topics. We observe that these topic names are highly abstract3 for the articles categorized under them. 
On the other hand, techniques proposed by systems such as Wikipedia Miner (Milne, 2009) and TAGME (Ferragina and Scaiella, 2010) generate several labels for each article in the dataset that are highly specific to the article. Collating all labels from all articles to create a label 1http://qwone.com/˜jason/20Newsgroups/ 2http://www.daviddlewis.com/resources/ testcollections/reuters21578/ 3Topic Concept is more abstract than the topic Science which is more abstract than the topicChemistry 553 ... … … ... … … ... … … ... … … Populated place Malus(Eudicot genera, Plants and Pollinators,… ) Cashew Apple (Edible nuts,Trees of Brazil,…) Hedge Apple (Trees of US, Maclura,..) Apple Corps(Companies of UK, Companies establisted in 1968,…) Apple Inc(Companies in California, Companies establisted in 1996, Hardware Companies,…) Apple Bank (Banks in New Your, Banks of USA,…) The Apple (1980 films, English language films,…) Apple Albums (1990 debut Albums, English language albums, Mercury records,…) Apple Band (English rock music groups, Musical groups from London,…) Apple Records (Scotish music groups) Apple Oklahoma (Unincorporated communities) Apple River (Villages in Illions) Apple Valley (Cities in California) Apple Store (Electronic companies of Us, Video game retailers,…) HP Apple (HP microprocessors, HP calculators) Apple Daily (Next media, Publications established in 1995) Apple Novel (2007 novels, Novels of England, debut novel) Apple Key (Mac OS, Computer keys) Apple Card Game (Point trick games) Companies Films Places Technology Music Root Plants Plants and Pollinators Edible Nuts Trees of Brazil Companies of UK Banks in New York 1980 films Apple Hardware HP Microprocessors Tropical Trees Computer Hardware Companies by year Films by country Operating Systems HP Products Publications Books Apple Computer (Apple hardware, Microelectronics,…) ... … … ... … … Trees by country Trees of US Retail Companies Companies ofCalifornia Cities in California Albums by language English Albums ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … ... … … Documents associated with fine-grained (near leaf level) topics Novels Technology Apple computer Apple Key Apple Store HP Apple Plants Malus Cashew Apple Hedge Apple Companies Apple corps Apple Inc. Apple bank Places Apple Oklahoma Apple River Apple Valley Music Films The Apple Other Apple Card game Apple Daily Apple Novel Apple Albums Apple Band Apple Records Input documents on 'Apple' with fine grained (near leaf level) topic assignment Topic DAG Output Disambiguation page for 'Apple' with documents grouped under summary topics ... … … Topic ... … … Summary Topic Parent-Child relation Ancestor-Descendant relation Topic-Object association Documents not grouped under any summary topic Document Name Fine-grained topics Figure 1: Topic Summarization overview. On the left, we show many documents related to Apple. In the middle, a Wikipedia category hierarchy shown as a topic DAG, links these documents at the leaf level. On the right, we show the output of our summarization process, which creates a set of summary topics (Plants, Technology, Companies, Films, Music and Places in this example) with the input documents classified under them. set for the dataset can result in a large number of labels and become unmanageable. 
Our proposed techniques can summarize such large sets of labels into a smaller and more meaningful label sets using a DAG-structured topic hierarchy. This also holds for image classification problems and datasets like ImageNet (Deng et al., 2009). We use the term summarize to highlight the fact that the smaller label set semantically covers the larger label set. For example, the topics Physics, Chemistry, and Mathematics can be summarized into a topic Science. A particularly important application of our work (and the one we use for our evaluations in Section 4) is the following: Given a collection of articles spanning different topics, but with similar titles, automatically generate a disambiguation page for those titles using the Wikipedia category hierarchy4 as a topic DAG. Disambiguation pages5 on Wikipedia are used to resolve conflicts in article titles that occur when a title is naturally associated with multiple articles on distinct topics. Each disambiguation page organizes articles into several groups, where the articles in each group pertain only to a specific topic. Disambiguations may be seen as paths in a hierarchy leading to different articles that arguably could have the same title. For example, the title Apple6 can refer to a plant, a company, a film, a 4http://en.wikipedia.org/wiki/Help:Categories 5http://en.wikipedia.org/wiki/Wikipedia:Disambiguation 6http://en.wikipedia.org/wiki/Apple_ (disambiguation) television show, a place, a technology, an album, a record label, and a newspaper daily. The problem then, is to organize the articles into multiple groups where each group contains articles of similar nature (topics) and has an appropriately discerned group heading. Figure 1 describes the topic summarization process for creation of the disambiguation page for “Apple”. All the above mentioned problems can be modeled as the problem of finding the most representative subset of topic nodes from a DAG-Structured topic hierarchy. We argue that many formulations of this problem are natural instances of submodular maximization, and provide a learning framework to create submodular mixtures to solve this problem. A set function f (.) is said to be submodular if for any element v and sets A ⊆B ⊆V \ {v}, where V represents the ground set of elements, f (A ∪{v})−f (A) ≥f (B ∪{v})−f (B). This is called the diminishing returns property and states, informally, that adding an element to a smaller set increases the function value more than adding that element to a larger set. Submodular functions naturally model notions of coverage and diversity in applications, and therefore, a number of machine learning problems can be modeled as forms of submodular optimization (Kempe et al., 2003; Krause and Guestrin, 2005; Narasimhan and Bilmes, 2004; Iyer et al., 2013; Lin and Bilmes, 2012; Lin and Bilmes, 2010). In this paper, we investigate structured prediction methods for learn554 ing weighted mixtures of submodular functions to summarize topics for a collection of objects using DAG-structured topic hierarchies. Throughout this paper we use the terms “topic” and “category” interchangeably. 1.1 Related Work To the best of our knowledge, the specific problem we consider here is new. Previous work on identifying topics can be broadly categorized into one of the following types: a) cluster the objects and then identify names for the clusters; or b) dynamically identify topics (including hierarchical) for a set of objects. 
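To keep the diminishing-returns idea concrete before reviewing prior work, the sketch below shows the simplest such objective considered in this paper, document coverage by the chosen topics, maximized greedily under a budget K; the topic-to-document sets are toy stand-ins for the transitive cover over the real DAG.

```python
# Toy instance of a monotone submodular objective: number of documents covered.
def coverage(selected, covers):
    covered = set()
    for t in selected:
        covered |= covers[t]
    return len(covered)

def greedy_select(covers, K):
    selected = []
    for _ in range(K):
        gains = {t: coverage(selected + [t], covers) - coverage(selected, covers)
                 for t in covers if t not in selected}
        if not gains or max(gains.values()) == 0:
            break
        selected.append(max(gains, key=gains.get))
    return selected

covers = {"Plants": {1, 2, 3}, "Companies": {4, 5, 6}, "Films": {7}, "Technology": {5, 8}}
print(greedy_select(covers, K=2))    # ['Plants', 'Companies']
```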
LDA (Blei et al., 2003) clusters the documents and simultaneously produces a set of topics into which the documents are clustered. In LDA, each document may be viewed as a mixture of various topics and the topic distribution is assumed to have a Dirichlet prior. LDA associates a group of high probability words to each identified topic. A name can be assigned to a topic by manually inspecting the words or using additional algorithms like (Mei et al., 2007; Maiya et al., 2013). LDA does not make use of existing topic hierarchies and correlation between topics. The Correlated Topic Model (Blei and Lafferty, 2006) induces a correlation structure between topics by using the logistic normal distribution instead of the Dirichlet. Another extension is the hierarchical LDA (Blei et al., 2004), where topics are joined together in a hierarchy by using the nested Chinese restaurant process. Nonparametric extensions of LDA include the Hierarchical Dirichlet Process (Teh et al., 2006) mixture model, which allows the number of topics to be unbounded and learnt from data and the Nested Chinese Restaurant Process which allows topics to be arranged in a hierarchy whose structure is learnt from data. In each of these approaches, unlike our proposed approach, an existing topic hierarchy is not used, nor is any additional objecttopic information leveraged. The pachinko allocation model (PAM)(Li and McCallum, 2006) captures arbitrary, nested, and possibly sparse correlations between topics using a DAG. The leaves of the DAG represent individual words in the vocabulary, while each interior node represents a correlation among its children, which may be words or other interior nodes (topics). PAM learns the probability distributions of words in a topic, subtopics in a topic, and topics in a document. We cannot, however, generate a subset of topics from a large existing topic DAG that can act as summary topics, using PAM. HSLDA (Perotte et al., 2011) introduces a hierarchically supervised LDA model to infer hierarchical labels for a document. It assumes an existing label hierarchy in the form of a tree. The model infers one or more labels such that, if a label l is inferred as relevant to a document, then all the labels from l to the root of the tree are also inferred as relevant to the document. Our approach differs from HSLDA since: (1) we use the label hierarchy to infer a set of labels for a group of documents; (2) we do not enforce the label hierarchy to be a tree as it can be a DAG; and (3) generalizing HSLDA to use a DAG structured hierarchy and infer labels for a group of documents (e.g., combining into one big document) also may not help in solving our problem. HSLDA will apply all the relevant labels to the documents as per the classifier that it learns for every label. Moreover, the “root” label is always applied and it is very likely that many labels near the top level of the label hierarchy are also classified as relevant to the group of documents. Wei and James (Bi and Kwok, 2011) present a hierarchical multi-label classification algorithm that can be used on both tree and DAG structured hierarchies. They formulate a search for the optimal consistent multi-label as the finding of the best subgraph in a tree/DAG. In our approach, we assume, individual documents are already associated with one or more topics and we find a consistent label set for a group of documents using the DAG structured topic hierarchy. Medelyan et al. (Medelyan et al., 2008) and Ferragina et al. 
(Ferragina and Scaiella, 2010) detect topics for a document using Wikipedia article names and category names as the topic vocabulary. These systems are able to extract signals from a text document, identify Wikipedia articles and/or categories that optimally match the document, and assign those article/category names as topics for the document. When run on a large collection of documents, these approaches generate enormous numbers of topics, a problem our proposed approach addresses.

1.2 Our Contributions

While most prior work discussed above focuses on the underlying set of documents (e.g., by clustering documents), we focus directly on the topics. In particular, we formulate the problem as subset selection on the set of topics within a DAG while simultaneously considering the documents to be categorized. Our method can scale to the colossal size of the DAG (1 million topics and 3 million correlation links between topics in Wikipedia). Moreover, our approach can naturally incorporate outputs from many of the aforementioned algorithms. Our approach is based on submodular maximization and mixture learning, which has been successfully used in applications such as document summarization (Lin, 2012) and image summarization (Tschiatschek et al., 2014), but has never been applied to topic identification tasks or, more generally, DAG summarization. We introduce a family of submodular functions to identify an appropriate set of topics from a DAG-structured hierarchy of topics for a group of documents. We characterize this topic appropriateness through a set of desirable properties such as coverage, diversity, specificity, clarity, and relevance. Each of the submodular function components we consider is monotone, thereby ensuring near-optimal performance obtainable via a simple greedy algorithm for optimization [7]. We also show how our technique naturally embodies outputs of other algorithms such as LDA, clustering, and classifications. Finally, we utilize a large margin formulation for learning mixtures of these submodular functions, and show how we can optimally learn them from training data. Our approach demonstrates how to utilize the features collectively in the document space and the topic space to infer a set of topics. From an empirical perspective, we introduce and evaluate our approach on a dataset of around 8000 disambiguations that was extracted from Wikipedia and subsequently cleaned using the methods described in the experimentation section. We show that our learning framework outperforms many of the baselines, and is practical enough to be used on large corpora.

[7] A simple greedy algorithm (Nemhauser et al., 1978) obtains a 1 - 1/e approximation guarantee for monotone submodular function maximization.

2 Problem Formulation

Let G(V, E) be the DAG-structured topic hierarchy with V topics. These topics are observed to have a parent-child (is-a) relationship forming a DAG. Let D be the set of documents that are associated with one or more of these topics. The middle portion of Figure 1 depicts a topic hierarchy with associated documents. The association links between the documents and topics can be hard or soft. In the case of a hard link, a document is attached to a set of topics. Examples include multi-labeled documents. In the case of a soft link, a document is associated with a topic with some degree of confidence (or probability). Furthermore, if a document is attached to a topic t, we assume that all the ancestor topics of t are also relevant for that document.
This assumption has been employed in earlier works (Blei et al., 2004; Bi and Kwok, 2011; Rousu et al., 2006) as well. Given a budget of K, our objective is to choose a set of K topics from V which best describe the documents in D. The notion of best describing topics is characterized through a set of desirable properties (coverage, diversity, specificity, clarity, relevance, and fidelity) that the K topics have to satisfy. The submodular functions that we introduce in the next section ensure these properties are satisfied. Formally, we solve the following discrete optimization problem:

$S^{\ast} \in \operatorname{argmax}_{S \subseteq V : |S| \leq K} \sum_i w_i f_i(S)$   (1)

where the $f_i$ are monotone submodular mixture components and the $w_i \geq 0$ are the weights associated with those mixture components. The set $S^{\ast}$ is the highest-scoring set of summary topics. It is easy to find massive (i.e., size in the order of a million) DAG-structured topic hierarchies in practice. Wikipedia's category hierarchy consists of more than 1M categories (topics) arranged hierarchically. In fact, they form a cyclic graph (Zesch and Gurevych, 2007). However, we can convert it to a DAG by eliminating the cycles as described in the supplementary material. YAGO (Suchanek et al., 2007) and Freebase (Bollacker et al., 2008) are other instances of massive topic hierarchies. The association of documents with an existing topic hierarchy is also well studied. Systems such as WikipediaMiner (Milne, 2009), TAGME (Ferragina and Scaiella, 2010), and several annotation systems (Dill et al., 2003; Mihalcea and Csomai, 2007; Bunescu and Pasca, 2006) attach topics from Wikipedia (and other catalogs) to the documents by establishing the hard or soft links mentioned above. Our goal is the following: Given a (ground set) collection V of topics organized in a pre-existing hierarchical DAG structure, and a collection D of documents, choose a size-K (K ∈ Z+) representative subset of topics. Our approach is distinct from earlier work (e.g., Kanungo et al., 2002; Blei et al., 2003) where typically only a set of documents is classified and categorized in some way. We next provide a few definitions needed later in the paper.

Definition 1 (Transitive Cover, Γ): A topic t is said to cover a set of documents Γ(t), called the transitive cover of the topic t, if for all documents i ∈ Γ(t), either i is associated directly with topic t or with any of the descendant topics of t in the topic DAG. A natural extension of this definition to a set of topics T is $\Gamma(T) = \cup_{t \in T} \Gamma(t)$.

Definition 2 (Truncated Transitive Cover, Γα): This is a transitive cover of topic t, but with the limitation that the path length between a document and the topic t is not more than α. Hence, $|\Gamma_\alpha(t)| \leq |\Gamma(t)|$.
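The two definitions above translate directly into short graph traversals. The sketch below is ours, not the authors' code; it assumes the DAG is given as a child-adjacency map with documents attached directly to topics, and it counts topic-to-topic hops from t for the truncated cover (the exact path-length convention is not pinned down in the text).

```python
# Sketch of Definitions 1 and 2: transitive cover Gamma(t) and truncated
# cover Gamma_alpha(t) over the topic DAG. 'children' maps a topic to its
# child topics; 'docs_at' maps a topic to the documents attached directly
# to it. Both structures are assumed inputs.

from collections import deque

def transitive_cover(topic, children, docs_at):
    """Gamma(t): documents attached to t or to any of its descendants."""
    covered, seen, stack = set(), {topic}, [topic]
    while stack:
        t = stack.pop()
        covered |= docs_at.get(t, set())
        for c in children.get(t, ()):
            if c not in seen:
                seen.add(c)
                stack.append(c)
    return covered

def truncated_cover(topic, children, docs_at, alpha):
    """Gamma_alpha(t): only documents reachable within alpha hops of t."""
    covered, best_depth = set(), {topic: 0}
    queue = deque([(topic, 0)])
    while queue:
        t, depth = queue.popleft()
        covered |= docs_at.get(t, set())
        if depth < alpha:
            for c in children.get(t, ()):
                if best_depth.get(c, alpha + 1) > depth + 1:
                    best_depth[c] = depth + 1
                    queue.append((c, depth + 1))
    return covered

def cover_of_set(topics, children, docs_at):
    """Gamma(T) = union of Gamma(t) over t in T."""
    out = set()
    for t in topics:
        out |= transitive_cover(t, children, docs_at)
    return out
```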
While our problem is closely related to clustering approaches, which consider the set of documents directly, there are some crucial differences. In particular, we focus on producing a clustering of documents where clusters are encouraged to honor a pre-defined DAG-structured topic hierarchy. Existing agglomerative clustering algorithms focusing on the coverage of documents may not produce the desired clustering. To understand this, consider six documents d1, d2, ..., d6 to be grouped into three clusters. There may be multiple ways to do this depending upon the multiple aggregation paths present in the topic DAG: ((d1, d2), (d3, d4), (d5, d6)) or ((d1, d2, d3), (d4, d5), (d6)) or ((d1, d2, d3, d4), (d5), (d6)) or something else. Hence, we need more stringent measures to prefer one clustering over the others. Our work addresses this with a variety of quality criteria (coverage, diversity, specificity, clarity, relevance, and fidelity, which are explained later in this paper) that are organically derived from well-established submodular functions. And, most importantly, we learn the right mixture of these qualities to be enforced from the data itself. Furthermore, our approach also generalizes these clustering approaches, since one of the components in our mixture of submodular functions is defined via these unsupervised approaches, and maps a given clustering to a set of topics in the hierarchy.

3 Submodular Components and Learning

Summarization is the task of extracting information from a source that is both small in size and still representative. Our problem is different from traditional summarization tasks since we have an underlying DAG as a topic hierarchy that we wish to summarize in response to a subset of documents. Thus, a critical part of our problem is to take the graph structure into account while creating the summaries. Below, we identify the properties we wish our summaries to possess.

Coverage: A summary set of topics should cover most of the documents. A document is said to be covered by a topic if there exists a path from the topic, going through intermediary descendant topics, to the document, i.e., the document is within the transitive cover of the topic.

Diversity: Summaries should be as diverse as possible, i.e., each summary topic should cover a unique set of documents. When a document is covered by more than one topic, that document is redundantly covered; e.g., "Finance" and "Banking" would be unlikely members of the same summary.

Summary qualities also involve "quality" notions, including:

Specificity/Clarity/Relevance/Coherence: These quality measures help us choose a set of topics that are neither too abstract nor overly specific. They ensure that the topics are clear and relevant to the documents that they represent. When additional information such as a clustering (from LDA or other sources) or a manual tagging of documents is available, these quality criteria encourage the chosen topics to show resemblance (coherence) to that clustering/tagging in terms of the transitive cover of documents they produce.

Below, we define a variety of submodular functions that capture the above properties, and we then describe a large margin learning framework for learning convex mixtures of such components.

3.1 Submodular Components

3.1.1 Coverage Based Functions

Coverage components capture "coverage" of a set of documents.

Weighted Set Cover Function: Given a set of categories S ⊆ V, define Γ(S) as the set of documents covered: for each topic s ∈ S, Γ(s) ⊆ D represents the documents covered by topic s, and $\Gamma(S) = \cup_{s \in S} \Gamma(s)$. The weighted set cover function, defined as $f(S) = \sum_{d \in \Gamma(S)} w_d = w(\Gamma(S))$, assigns weights to the documents based on their relative importance (e.g., in Wikipedia disambiguation, the different documents could be ranked based on their priority).
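A minimal sketch of the weighted set cover component follows (ours, for illustration); it assumes `cover` maps each topic to its precomputed transitive cover, for example built with `transitive_cover` from the earlier sketch, and falls back to uniform document weights when none are given.

```python
# Sketch of the weighted set cover component f(S) = sum of w_d over the
# documents d in Gamma(S).

def weighted_set_cover(S, cover, doc_weight=None):
    covered = set()
    for s in S:
        covered |= cover.get(s, set())
    if doc_weight is None:
        return float(len(covered))        # plain (unweighted) set cover
    return float(sum(doc_weight.get(d, 1.0) for d in covered))
```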
Feature-based Functions: This class of function represents coverage in feature space. Given a set of categories S ⊆ V and a set of features U, define $m_u(S)$ as the score associated with the set of categories S for feature u ∈ U. The feature set could represent, for example, the documents, in which case $m_u(S)$ represents the number of times document u is covered by the set S. U could also represent more complicated features. For example, in the context of Wikipedia disambiguation, U could represent TF-IDF features over the documents. Feature-based functions are then defined as $f(S) = \sum_{u \in U} \psi(m_u(S))$, where ψ is a concave (e.g., the square root) function. This function class has been successfully used in several applications (Kirchhoff and Bilmes, 2014; Wei et al., 2014a; Wei et al., 2014b).

3.1.2 Similarity Based Functions

Similarity functions are defined through a similarity matrix $\{s_{ij}\}_{i,j \in V}$. Given categories i, j ∈ V, the similarity in our case can be defined as $s_{ij} = |\Gamma(i) \cap \Gamma(j)|$, i.e., the number of documents commonly covered by both i and j.

Facility Location: The facility location function, defined as $f(S) = \sum_{i \in V} \max_{j \in S} s_{ij}$, is a natural model for k-medoids and exemplar-based clustering, and has been used in several summarization problems (Tschiatschek et al., 2014; Wei et al., 2014a).

Penalty Based Diversity: A similarity matrix may be used to express a form of coverage of a set S that is then penalized with a redundancy term, as in the following difference: $f(S) = \sum_{i \in V, j \in S} s_{ij} - \lambda \sum_{i \in S} \sum_{j \in S} s_{ij}$ (Lin and Bilmes, 2011). Here λ ∈ [0, 1]. This function is submodular, but is not in general monotone, and has been used in document summarization (Lin and Bilmes, 2011), as a dispersion function (Borodin et al., 2012), and in image summarization (Tschiatschek et al., 2014).

3.1.3 Quality Control (QC) Functions

QC functions ensure a quality criterion is met by a set S of topics. We define the quality score of the set S as $F_q(S) = \sum_{s \in S} f_q(s)$, where $f_q(s)$ is the quality score of topic s for quality q. Therefore, $F_q(S)$ is a modular function in S. We investigate three types of quality control functions: Topic Specificity, Topic Clarity, and Topic Relevance.

Topic Specificity: The farther a topic is from the root of the DAG, the more specific it becomes. Topics higher up in the hierarchy are abstract and less specific. We therefore prefer topics low in the DAG, but lower topics also have less coverage. We define $f_{\text{specificity}}(s) = s_h$, where $s_h$ is the height of topic s in the DAG. The root topic has height zero and the "height" increases as we move down the DAG in Figure 1.

Topic Clarity: Topic clarity is the fraction of descendant topics that cover one or more documents. If a topic has many descendant topics that do not cover any documents, it has less clarity. Formally, $f_{\text{clarity}}(s) = \frac{\sum_{t \in \text{descendants}(s)} [\![\, \Gamma(t) > 0 \,]\!]}{|\text{descendants}(s)|}$, where $[\![\cdot]\!]$ is the indicator function.

Topic Relevance: A topic is considered to be better related to a document if the number of hops needed to reach the document from that topic is lower. Given any set A ⊆ D of documents and any topic s ∈ V, we define $f_{\text{relevance}}(s \mid A) = \operatorname{argmin}_{\alpha} \{\alpha : A \subseteq \Gamma_\alpha(s)\}$.
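The feature-based and facility-location components above can be sketched in a few lines (our illustration, not the paper's code); `cover` again maps topics to their transitive covers, and the similarity $s_{ij} = |\Gamma(i) \cap \Gamma(j)|$ is computed on the fly here, whereas a real system would precompute or sparsify it.

```python
# Sketches of two more components from this section (illustrative only).

import math

def feature_based(S, cover, psi=math.sqrt):
    """f(S) = sum_u psi(m_u(S)) with documents as features: m_u(S) is
    the number of topics in S whose cover contains document u."""
    counts = {}
    for s in S:
        for d in cover.get(s, ()):
            counts[d] = counts.get(d, 0) + 1
    return sum(psi(c) for c in counts.values())

def facility_location(S, V, cover):
    """f(S) = sum over i in V of max over j in S of s_ij."""
    total = 0
    for i in V:
        gi = cover.get(i, set())
        total += max((len(gi & cover.get(j, set())) for j in S), default=0)
    return total
```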
QC Functions as Barrier Modular Mixtures: We introduce a modular function for every QC function as follows:

$f^{\alpha}_{\text{specificity}}(s) = \begin{cases} 1 & \text{if the height of topic } s \text{ is at least } \alpha \\ 0 & \text{otherwise} \end{cases}$

for every possible value of α. This creates a submodular mixture with as many components as the number of possible values of α. In our experiments with Wikipedia, we had α varying from 1 to 120, stepping by 1, adding 120 modular mixture components. Similarly, we define

$f^{\beta}_{\text{clarity}}(s) = \begin{cases} 1 & \text{if the clarity of topic } s \text{ is at least } \beta \\ 0 & \text{otherwise} \end{cases}$

for every possible (discretized to make it countably finite) value of β. Finally, $f^{\gamma}_{\text{relevance}}(s) = f_{\text{cov}}(s \mid \Gamma_\gamma(s))$, where $f_{\text{cov}}(\cdot)$ is the coverage submodular function and $s \mid X$ indicates coverage of a topic s over a set of documents X. All these functions (modular and submodular terms) are added as mixture components in our learning framework to learn suitable weights for them. We then use these weights in our inference procedure to obtain a subset of topics as described in Section 3.2. We show from our experiments that this approach performs better than all other approaches and baselines.

3.1.4 Fidelity Functions

A function representing the fidelity of a set S to another reference set R is one that takes a large value when the set S represents the set R. Such a function scores inferred topics high when they resemble a reference set of topics and/or item clusters. The reference set in this case can be produced from other algorithms such as k-means, LDA and its variants, or from a manually tagged corpus. Next we describe one such fidelity function.

Topic Coherence: This function scores a set of topics S high when the transitive cover (Definition 1) produced by the topics in S resembles the clusters of documents produced by an external source (k-means, LDA, or manual). Given an external source that clusters the documents, producing T clusters $L_1, L_2, \ldots, L_T$ (for T topics), topic coherence is defined as $f(S) = \sum_{t=1}^{T} \max_{k \in S} w_{k,t}$, where $w_{k,t}$ is the harmonic mean of $w^{p}_{k,t}$ and $w^{r}_{k,t}$, with $w^{p}_{k,t} = \frac{|\Gamma(k) \cap L_t|}{|\Gamma(k)|}$ and $w^{r}_{k,t} = \frac{|\Gamma(k) \cap L_t|}{|L_t|}$. Note that $w^{p}_{k,t} \geq 0$ and $w^{r}_{k,t} \geq 0$ are the precision and recall of the resemblance, and $w_{k,t}$ is the F1 measure. If the transitive cover of topics in S resembles the reference clusters $L_t$ exactly, we attain maximum coherence (or fidelity). As the resemblance diminishes, the score decreases. The above function f(S) is monotone submodular.

3.1.5 Mixture of Submodular Components

Given the different classes of submodular functions above, we construct our submodular scoring function $F_w(\cdot)$ as a convex combination of these different submodular functions $f_1, f_2, \ldots, f_m$. In other words,

$F_w(S) = \sum_{i=1}^{m} w_i f_i(S)$,   (2)

where $w = (w_1, \ldots, w_m)$, $w_i \geq 0$, and $\sum_i w_i = 1$. The components $f_i$ are submodular and assumed to be normalized, i.e., $f_i(\emptyset) = 0$, and $f_i(V) = 1$ for monotone functions and $\max_{A \subseteq V} f_i(A) \leq 1$ for non-monotone functions. A simple way to normalize a monotone submodular function is to define the component as $f_i(S) / f_i(V)$. This ensures that the components are compatible with each other. Obviously, the merit of the scoring function $F_w(\cdot)$ depends on the selection of the components.
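Below is a sketch, in our own notation, of the normalized mixture of Equation (2) together with the plain greedy selection that the learning and inference procedures described next (Sections 3.2-3.4) rely on; the accelerated (lazy) variant of Minoux (1978) adds a priority queue but returns the same set for monotone components. All function and parameter names are ours.

```python
# Sketch of F_w(S) = sum_i w_i * f_i(S) (Equation 2), with the simple
# normalization f_i(S) / f_i(V) mentioned above for monotone components,
# plus cardinality-constrained greedy maximization.

def make_mixture(components, weights, V):
    """components: list of set functions; weights: matching non-negative
    weights; V: the ground set of topics."""
    norms = [f(V) or 1.0 for f in components]     # guard against f_i(V) = 0
    def F(S):
        return sum(w * f(S) / z
                   for w, f, z in zip(weights, components, norms))
    return F

def greedy_maximize(F, V, K):
    """Pick up to K topics from the set V, greedily maximizing F."""
    S = set()
    current = F(S)
    for _ in range(K):
        best_gain, best_v = 0.0, None
        for v in V - S:
            gain = F(S | {v}) - current
            if gain > best_gain:
                best_gain, best_v = gain, v
        if best_v is None:                        # no positive marginal gain
            break
        S.add(best_v)
        current += best_gain
    return S
```

Since the Jaccard loss introduced in Section 3.3 is modular, the same routine can also be used for the loss-augmented inference required during training, by maximizing $F_w(S') + L(S')$ instead of $F_w(S')$.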
3.2 Large Margin Learning

We optimize the weights w of the scoring function $F_w(\cdot)$ in a large-margin structured prediction framework. In this setting, we assume we have training data in the form of pairs of a set of documents and a human-generated summary as a set of topics. For example, in the case of Wikipedia disambiguation, we use the human-generated disambiguation pages as the ground-truth summaries. We represent the set of ground-truth summaries as $\mathcal{S} = \{S_1, S_2, \ldots, S_N\}$. In large margin training, the weights are optimized such that ground-truth summaries are separated from competitor summaries by a loss-dependent margin:

$F_w(S) \geq F_w(S') + L(S'), \quad \forall S \in \mathcal{S},\ S' \in Y \setminus S$,   (3)

where $L(\cdot)$ is the loss function, and where Y is a structured output space (for example, Y is the set of summaries that satisfy a certain budget B, i.e., $Y = \{S' \subseteq V : |S'| \leq B\}$). We assume the loss to be normalized, $0 \leq L(S') \leq 1$ for all $S' \subseteq V$, to ensure that mixture and loss are calibrated. Equation (3) can be stated as $F_w(S) \geq \max_{S' \in Y} [F_w(S') + L(S')]$ for all $S \in \mathcal{S}$, which is called loss-augmented inference. We introduce slack variables and minimize the regularized sum of slacks (Lin and Bilmes, 2012):

$\min_{w \geq 0,\, \|w\|_1 = 1} \; \sum_{S \in \mathcal{S}} \Big( \max_{S' \in Y} \big[ F_w(S') + L(S') \big] - F_w(S) \Big) + \frac{\lambda}{2} \|w\|_2^2$,   (4)

where the non-negative orthant constraint $w \geq 0$ ensures that the final mixture is submodular. Note that a 2-norm regularizer is used on top of a 1-norm constraint $\|w\|_1 = 1$, which we interpret as a prior to encourage higher entropy, and thus more diverse mixture distributions. Tractability depends on the choice of the loss function. The parameters w are learnt using stochastic gradient descent as in (Tschiatschek et al., 2014).

3.3 Loss Functions

A natural choice of loss functions for our case can be derived from cluster evaluation metrics. Every inferred topic s induces a subset of documents, namely the transitive cover Γ(s) of s. We compare these clusters with the clusters induced from the true topics in the training set and compute the loss. In this paper, we use the Jaccard Index (JI) as a loss function. Let S be the inferred topics and T be the true topics. The Jaccard loss is defined as

$L_{\text{jaccard}}(S, T) = 1 - \frac{1}{k} \sum_{s \in S} \max_{t \in T} \frac{|\Gamma(s) \cap \Gamma(t)|}{|\Gamma(s) \cup \Gamma(t)|}$,

where $k = |S| = |T|$ is the number of topics. When the clusterings produced by the inferred and the true topics are similar, the Jaccard loss is 0. When they are completely dissimilar, the loss is maximal, i.e., 1. The Jaccard loss is a modular function.

3.4 Inference Algorithm: Greedy

Having learnt the weights for the mixture components, the resulting function $F_w(S) = \sum_{i=1}^{m} w_i f_i(S)$ is a submodular function. In the case when the individual components are themselves monotone (all our functions in fact are), $F_w(S)$ can be optimized by the accelerated greedy algorithm (Minoux, 1978). Thanks to submodularity, we can obtain near-optimal solutions very efficiently. In case the functions are all monotone submodular, we can guarantee that the solution is within a $1 - 1/e$ factor of the optimal solution.

4 Experimental Results

To validate our approach, we make use of the Wikipedia category structure as a topic DAG and apply our technique to the task of automatic generation of Wikipedia disambiguation pages. We pre-processed the category graph to eliminate the cycles in order to make it a DAG. Each Wikipedia disambiguation page is manually created by Wikipedia editors by grouping a collection of Wikipedia articles into several groups. Each group is then assigned a name, which serves as a topic for the group. Typically, a disambiguation page segregates around 20-30 articles into 5-6 groups. Our goal is to measure how accurately we can recreate the groups for a disambiguation page and label them, given only the collection of articles mentioned in that disambiguation page (when the actual groupings and labels are hidden).

4.1 Datasets

We parsed the contents of Wikipedia disambiguation pages and extracted disambiguation page names, article groups, and group names. We collected about 8000 disambiguation pages that had at least four groups on them. The Wikipedia category structure is used as the topic DAG. We eliminated a few administrative categories such as "Hidden Categories", "Articles needing cleanup", and the like. The final DAG had about 1M topics and 3M links.

4.2 Evaluation Metrics

Every group of articles on a Wikipedia disambiguation page is assigned a name by the editors. Unfortunately, these names may not correspond to the Wikipedia category (topic) names.
For example, one of the groups on the “Matrix” disambiguation page has a name “Business and government” and there is no Wikipedia category by that name. However, the group names generated by our (and baseline) method are from the Wikipedia categories (which forms our topic DAG). In addition, there can be multiple relevant names for a group. For example, a group on a disambiguation page may be called “Calculus”, but an algorithm may rightly generate “Vector Calculus”. Hence we cannot evaluate the accuracy of an algorithm just by matching the generated group names to those on the disambiguation page. To alleviate this problem, we adopt cluster-based evaluation metrics. We treat every group of articles generated by an algorithm under a topic for a disambiguation page as a cluster of articles. These are considered as inferred clusters for a disambiguation page. We compare them against the actual grouping of articles on the Wikipedia disambiguation page by treating those groups as true clusters. We can now adopt Jaccard Index, F1-measure, and NMI (Normalized Mutual Information) based cluster evaluation metrics described in (Manning et al., 2008). For each disambiguation page in the test set, we compute every metric score and then average it over all the disambiguation pages. 4.3 Methods Compared We validated our approach by comparing against several baselines described below. We also compared two variations of our approach as described next. For each of these cases (baselines and two variations) we generated and compared the metrics (Jaccard Index, F1-measure and NMI) as described in the previous section. KMdocs: K-Means algorithm run on articles as TF-IDF vectors of words. The number of clusters K is set to the number of true clusters on the Wikipedia disambiguation page. KMeddocs: K-Medoids algorithm with articles as TF-IDF vectors of words. The number of clusters are set as in KMdocs. KMedtopics: K-Medoids run on topics as TFIDF vectors of words. The words for each topic is taken from the articles that are in the transitive cover of the topic. LDAdocs: LDA algorithm with the number of topics set to the number of true clusters on the Wikipedia disambiguation page. Each article is then grouped under the highest probability topic. SMMLcov: This is the submodular mixture learning case explained in section 3.1.5. Here we consider a mixture of all the submodular functions governing coverage, diversity, fidelity and QC functions. However, we exclude the similarity based functions described in section 3.1.2. Coverage based functions have a time complexity of O (n) whereas similarity based functions are O n2 . By excluding similarity based functions, we can compare the quality of the results with and without O(n2) functions. We learn the mixture weights from the training set and use them during inference on the test set to subset K topics through the submodular maximization (Equation 1). SMMLcov+sim: This case is similar to SMMLcov except that, we include similarity based submodular mixture components. This makes the inference time complexity O n2 . We do not compare against HSLDA, PAM and few other techniques cited in the related work sections because they do not produce a subset of K summary topics — these are not directly comparable with our work. 4.4 Evaluation Results We show that the submodular mixture learning and maximization approaches, i.e., SMMLcov and SMMLcov+sim outperform other approaches in various metrics. 
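The cluster-based evaluation described in Section 4.2 treats each algorithm's article groups as inferred clusters and the disambiguation page's groups as true clusters. As one concrete example of such a metric, here is a small NMI sketch (ours, following the standard definitions in Manning et al. (2008); JI and F1 are computed analogously from the same overlap counts). It assumes both clusterings partition the same set of articles.

```python
# Sketch of NMI between an inferred clustering and the true grouping of
# articles on a disambiguation page. Clusterings are lists of sets of
# article ids. This is our illustration, not the authors' evaluation code.

import math

def entropy(clusters, n):
    return -sum((len(c) / n) * math.log(len(c) / n)
                for c in clusters if c)

def nmi(inferred, truth):
    n = sum(len(c) for c in inferred)   # total number of articles
    if n == 0:
        return 0.0
    mi = 0.0
    for c in inferred:
        for t in truth:
            overlap = len(c & t)
            if overlap:
                mi += (overlap / n) * math.log(n * overlap / (len(c) * len(t)))
    denom = (entropy(inferred, n) + entropy(truth, n)) / 2.0
    return mi / denom if denom > 0 else 0.0
```

Per-page scores are then averaged over all disambiguation pages in the test set, as described above.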
In all these experiments, we performed 5-fold cross validation to learn the parameters from 80% of the disambiguation pages and evaluated on the remaining 20% in each fold. In Figure 2a we summarize the results of the comparison of the methods mentioned above on Jaccard Index, F1 measure, and NMI. Our proposed techniques SMMLcov and SMMLcov+sim outperform the other techniques consistently. In Figures 2b and 2c we measure the number of test instances (i.e., disambiguation queries) in which each of the algorithms dominates (wins) in the evaluation metrics. In 60% of the disambiguation queries, the SMMLcov and SMMLcov+sim approaches produce higher JI, F1, and NMI than all other methods. This indicates that the clusters of articles produced by our technique resemble the clusters of articles present on the disambiguation page better than other techniques.

[Figure 2: Comparison of techniques. (a) JI, F1, and NMI compared with the baselines; (b) winning percentages of SMMLcov against other methods; (c) winning percentages of SMMLcov+sim against other methods. Methods compared: KMdocs, KMeddocs, KMedtopics, LDAdocs, SMMLcov, SMMLcov+sim.]

From Figures 2b and 2c it is clear that the O(n) time complexity submodular mixture functions (SMMLcov) perform on par with the O(n^2) functions (SMMLcov+sim), but at a greatly reduced execution time, demonstrating the sufficiency of O(n) functions for our task. On average, for each disambiguation query, SMMLcov took around 40 seconds (over the DAG of 1M topics and 3M edges) to infer the topics, whereas SMMLcov+sim took around 35 minutes. Both experiments were carried out on a machine with 32 GB RAM and an Eight-Core AMD Opteron(tm) Processor 2427.

5 Conclusions

We investigated the problem of summarizing topics over a massive topic DAG such that the summary set of topics produced represents the objects in the collection. This representation is characterized through various classes of submodular (and monotone) functions that capture coverage, similarity, diversity, specificity, clarity, relevance, and fidelity of the topics. Currently we assume that the number of topics K is given as an input to our algorithm. It would be an interesting future problem to estimate the value of K automatically in our setting. As future work, we also plan to extend our techniques to produce a hierarchical summary of topics and to scale them across heterogeneous collections of objects (from different domains) to bring all of them under the same topic DAG and investigate interesting cases thereon.

Acknowledgements: This material is based upon work supported by the National Science Foundation under Grant No. IIS-1162606, and by a Google, a Microsoft, and an Intel research award. Rishabh Iyer acknowledges support from the Microsoft Research Ph.D. Fellowship.

References

Zafer Barutcuoglu, Robert E. Schapire, and Olga G. Troyanskaya. 2006. Hierarchical multi-label prediction of gene function. Bioinformatics, 22(7):830-836, April.

W. Bi and J. T. Kwok. 2011. Multi-label classification on tree- and DAG-structured hierarchies. In ICML.

David M. Blei and John D. Lafferty. 2006. Correlated topic models. In Proceedings of the 23rd International Conference on Machine Learning, pages 113-120. MIT Press.

David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993-1022, March.

David M. Blei, Thomas L. Griffiths, Michael I. Jordan, and Joshua B. Tenenbaum. 2004.
Hierarchical topic models and the nested chinese restaurant process. In Advances in Neural Information Processing Systems, page 2003. MIT Press. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, SIGMOD ’08, pages 1247–1250, New York, NY, USA. ACM. Allan Borodin, Hyun Chul Lee, and Yuli Ye. 2012. Max-sum diversification, monotone submodular functions and dynamic updates. In Proceedings of Principles of Database Systems, pages 155–166. ACM. Razvan Bunescu and Marius Pasca. 2006. Using encyclopedic knowledge for named entity disambiguation. In Proceedings of the 11th Conference of the European Chapter of the Association for Computational Linguistics (EACL-06), Trento, Italy, pages 9– 16, April. Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. 2009. Imagenet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, 2009. CVPR 2009. IEEE Conference on, pages 248–255. IEEE. Stephen Dill, Nadav Eiron, David Gibson, Daniel Gruhl, R. Guha, Anant Jhingran, Tapas Kanungo, Sridhar Rajagopalan, Andrew Tomkins, John A. Tomlin, and Jason Y. Zien. 2003. Semtag and seeker: Bootstrapping the semantic web via automated semantic annotation. In Proceedings of the 12th International Conference on World Wide Web, WWW ’03, pages 178–186, New York, NY, USA. ACM. Paolo Ferragina and Ugo Scaiella. 2010. Tagme: On-the-fly annotation of short text fragments (by wikipedia entities). In Proceedings of the 19th ACM International Conference on Information and Knowledge Management, CIKM ’10, pages 1625– 1628, New York, NY, USA. R. Iyer, S. Jegelka, and J. Bilmes. 2013. Fast semidifferential-based submodular function optimization. ICML. Tapas Kanungo, David M. Mount, Nathan S. Netanyahu, Christine D. Piatko, Ruth Silverman, Angela Y. Wu, Senior Member, and Senior Member. 2002. An efficient k-means clustering algorithm: Analysis and implementation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24:881–892. D. Kempe, J. Kleinberg, and E. Tardos. 2003. Maximizing the spread of influence through a social network. In SIGKDD. Katrin Kirchhoff and Jeff Bilmes. 2014. Submodularity for data selection in machine translation. October. A. Krause and C. Guestrin. 2005. Near-optimal nonmyopic value of information in graphical models. In Proceedings of Uncertainity in Artificial Intelligence. UAI. Wei Li and Andrew McCallum. 2006. Pachinko allocation: Dag-structured mixture models of topic correlations. In Proceedings of the 23rd International Conference on Machine Learning, ICML ’06, pages 577–584, New York, NY, USA. ACM. H. Lin and J. Bilmes. 2010. Multi-document summarization via budgeted maximization of submodular functions. In NAACL. Hui Lin and Jeff Bilmes. 2011. A class of submodular functions for document summarization. In The 49th Meeting of the Assoc. for Comp. Ling. Human Lang. Technologies (ACL/HLT-2011), Portland, OR, June. H. Lin and J. Bilmes. 2012. Learning mixtures of submodular shells with application to document summarization. In Conference on Uncertainty in Artificial Intelligence (UAI), page 479490. Hui Lin. 2012. Submodularity in Natural Language Processing: Algorithms and Applications. Ph.D. thesis, University of Washington, Dept. of EE. Min ling Zhang and Zhi hua Zhou. 2007. Ml-knn: A lazy learning approach to multi-label learning. 
PATTERN RECOGNITION, 40:2007. Arun S. Maiya, John P. Thompson, Francisco LoaizaLemos, and Robert M. Rolfe. 2013. Exploratory analysis of highly heterogeneous document collections. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13, pages 1375–1383, New York, NY, USA. ACM. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Sch¨utze. 2008. Introduction to Information Retrieval. Cambridge University Press, New York, NY, USA. Olena Medelyan, Ian H. Witten, and David Milne. 2008. Topic indexing with Wikipedia. In Proceedings of the Wikipedia and AI workshop at AAAI-08. AAAI. 562 Qiaozhu Mei, Xuehua Shen, and ChengXiang Zhai. 2007. Automatic labeling of multinomial topic models. In Proceedings of the 13th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’07, pages 490–499, New York, NY, USA. ACM. Rada Mihalcea and Andras Csomai. 2007. Wikify!: Linking documents to encyclopedic knowledge. In Proceedings of the Sixteenth ACM Conference on Conference on Information and Knowledge Management, CIKM ’07, pages 233–242, New York, NY, USA. ACM. David Milne. 2009. An open-source toolkit for mining wikipedia. In In Proc. New Zealand Computer Science Research Student Conf, page 2009. Michel Minoux. 1978. Accelerated greedy algorithms for maximizing submodular set functions. In J. Stoer, editor, Optimization Techniques, volume 7 of Lecture Notes in Control and Information Sciences, chapter 27, pages 234–243. Springer Berlin Heidelberg, Berlin/Heidelberg. Mukund Narasimhan and Jeff Bilmes. 2004. PAClearning bounded tree-width graphical models. In Uncertainty in Artificial Intelligence: Proceedings of the Twentieth Conference (UAI-2004). Morgan Kaufmann Publishers, July. George L Nemhauser, Laurence A Wolsey, and Marshall L Fisher. 1978. An analysis of approximations for maximizing submodular set functionsi. Mathematical Programming, 14(1):265–294. Adler J. Perotte, Frank Wood, Noemie Elhadad, and Nicholas Bartlett. 2011. Hierarchically supervised latent dirichlet allocation. In John Shawe-Taylor, Richard S. Zemel, Peter L. Bartlett, Fernando C. N. Pereira, and Kilian Q. Weinberger, editors, NIPS, pages 2609–2617. Juho Rousu, Craig Saunders, Sndor Szedmk, and John Shawe-Taylor. 2006. Kernel-based learning of hierarchical multilabel classification models. Journal of Machine Learning Research, 7:1601–1626. Jr. Silla, CarlosN. and AlexA. Freitas. 2011. A survey of hierarchical classification across different application domains. Data Mining and Knowledge Discovery, 22(1-2):31–72. Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: A core of semantic knowledge. In Proceedings of the 16th International Conference on World Wide Web, WWW ’07, pages 697– 706, New York, NY, USA. ACM. Yee Whye Teh, Michael I. Jordan, Matthew J. Beal, and David M. Blei. 2006. Hierarchical dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581. Sebastian Tschiatschek, Rishabh Iyer, Hoachen Wei, and Jeff Bilmes. 2014. Learning Mixtures of Submodular Functions for Image Collection Summarization. In Neural Information Processing Systems (NIPS). Grigorios Tsoumakas, Ioannis Katakis, and Ioannis Vlahavas. 2010. Mining multi-label data. In Oded Maimon and Lior Rokach, editors, Data Mining and Knowledge Discovery Handbook, pages 667–685. Springer US. Kai Wei, Rishabh Iyer, and Jeff Bilmes. 2014a. Fast multi-stage submodular maximization. In ICML. 
Kai Wei, Yuzong Liu, Katrin Kirchhoff, Chris Bartels, and Jeff Bilmes. 2014b. Submodular subset selection for large-scale speech training data. Proceedings of ICASSP, Florence, Italy. Torsten Zesch and Iryna Gurevych. 2007. Analysis of the wikipedia category graph for nlp applications. In Proceedings of the TextGraphs-2 Workshop (NAACL-HLT), pages 1–8, Rochester, April. Association for Computational Linguistics. 563
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 564–574, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Learning to Explain Entity Relationships in Knowledge Graphs Nikos Voskarides∗ University of Amsterdam [email protected] Edgar Meij Yahoo Labs, London [email protected] Manos Tsagkias 904Labs, Amsterdam [email protected] Maarten de Rijke University of Amsterdam [email protected] Wouter Weerkamp 904Labs, Amsterdam [email protected] Abstract We study the problem of explaining relationships between pairs of knowledge graph entities with human-readable descriptions. Our method extracts and enriches sentences that refer to an entity pair from a corpus and ranks the sentences according to how well they describe the relationship between the entities. We model this task as a learning to rank problem for sentences and employ a rich set of features. When evaluated on a large set of manually annotated sentences, we find that our method significantly improves over state-of-the-art baseline models. 1 Introduction Knowledge graphs are a powerful tool for supporting a large spectrum of search applications including ranking, recommendation, exploratory search, and web search (Dong et al., 2014). A knowledge graph aggregates information around entities across multiple content sources and links these entities together, while at the same time providing entity-specific properties (such as age or employer) and types (such as actor or movie). Although there is a growing interest in automatically constructing knowledge graphs, e.g., from unstructured web data (Weston et al., 2013; Craven et al., 2000; Fan et al., 2012), the problem of providing evidence on why two entities are related in a knowledge graph remains largely unaddressed. Extracting and presenting evidence for linking two entities, however, is an important aspect of knowledge graphs, as it can enforce trust between the user and a search engine, which in turn can improve long-term user engagement, e.g., in the context of related entity recommendation (Blanco et al., 2013). Although knowledge ∗This work was carried out while this author was visiting Yahoo Labs. graphs exist that provide this functionality to a certain degree (e.g., when hovering over Google’s suggested entities, see Figure 1), to the best of our knowledge there is no previously published research on methods for entity relationship explanation. Figure 1: Part of Google’s search result page for the query “barack obama”. When hovering over the related entity “Michelle Obama”, an explanation of the relationship between her and “Barack Obama” is shown. In this paper we propose a method for explaining the relationship between two entities, which we evaluate on a newly constructed annotated dataset that we make publicly available. In particular, we consider the task of explaining relationships between pairs of Wikipedia entities. We aim to infer a human-readable description for an entity pair given a relationship between the two entities. Since Wikipedia does not explicitly define relationships between entities we use a knowledge graph to obtain these relations. We cast our task as a sentence ranking problem: we automatically extract sentences from a corpus and rank 564 them according to how well they describe a given relationship between a pair of entities. 
For ranking purposes, we extract a rich set of features and use learning to rank to effectively combine them. Our feature set includes both traditional information retrieval and natural language processing features that we augment with entity-dependent features. These features leverage information from the structure of the knowledge graph. On top of this, we use features that capture the presence in a sentence of the relationship of interest. For our evaluation we focus on “people” entities and we use a large, manually annotated dataset of sentences. The research questions we address are the following. First, we ask what the effectiveness of state-of-the-art sentence retrieval models is for explaining a relationship between two entities (RQ1). Second, we consider whether we can improve over sentence retrieval models by casting the task in a learning to rank framework (RQ2). Third, we examine whether we can further improve performance by using relationship-dependent models instead of a relationship-independent one (RQ3). We complement these research questions with an error and feature analysis. Our main contributions are a robust and effective method for explaining entity relationships, detailed insights into the performance of our method and features, and a manually annotated dataset. 2 Related Work We combine ideas from sentence retrieval, learning to rank, and question answering to address the task of explaining relationships between entities. Previous work that is closest to the task we address in this paper is that of Blanco and Zaragoza (2010) and Fang et al. (2011). First, Blanco and Zaragoza (2010) focus on finding and ranking sentences that explain the relationship between an entity and a query. Our work is different in that we want to explain the relationship between two entities, rather than a query. Fang et al. (2011) explore the generation of a ranked list of knowledge base relationships for an entity pair. Instead, we try to select sentences that describe a particular relationship, assuming that this is given. Our approach builds on sentence retrieval, where one retrieves sentences rather than documents that answer an information need. Document retrieval models such as tf-idf, BM25, and language modeling (Baeza-Yates et al., 1999) have been extended to tackle sentence retrieval. Three of the most successful sentence retrieval methods are TFISF (Allan et al., 2003), which is a variant of the vector space model with tf-idf weighting, language modeling with local context (Murdock, 2006; Fern´andez et al., 2011), and a recursive version of TFISF that accounts for local context (Doko et al., 2013). TFISF is very competitive compared to document retrieval models tuned specifically for sentence retrieval (e.g., BM25 and language modeling (Losada, 2008)) and we therefore include it as a baseline. Sentences that are suitable for explaining relationships can have attributes that are important for ranking but cannot be captured by term-based retrieval models. One way to combine a wide range of ranking features is learning to rank (LTR). Recent years have witnessed a rapid increase in the work on learning to rank, and it has proven to be a very successful method for combining large numbers of ranking features, for web search, but also other information retrieval applications (Burges et al., 2011; Surdeanu et al., 2011; Agarwal et al., 2012). We use learning to rank and represent each sentence with a set of features that aim to capture different dimensions of the sentence. 
Question answering (QA) is the task of providing direct and concise answers to questions formed in natural language (Hirschman and Gaizauskas, 2001). QA can be regarded as a similar task to ours, assuming that the combination of entity pair and relationship form the “question” and that the “answer” is the sentence describing the relationship of interest. Even though we do not follow the QA paradigm in this paper, some of the features we use are inspired by QA systems. In addition, we employ learning to rank to combine the devised features, which has recently been successfully applied for QA (Surdeanu et al., 2011; Agarwal et al., 2012). 3 Problem Statement We address the problem of explaining relationships between pairs of entities in a knowledge graph. We operationalize the problem as a problem of ranking sentences from documents in a corpus that is related to the knowledge graph. More specifically, given two entities ei and ej that form an entity pair ⟨ei,ej⟩, and a relation r between them, the task is to extract a set of can565 didate sentences Sij = {sij1,...,sijk} that refer to ⟨ei,ej⟩and to impose a ranking on the sentences in Sij. The relation r has the general form ⟨type(ei),terms(r),type(ej)⟩, where type(e) is the type of the entity e (e.g., Person or Actor) and terms(r) are the terms of the relation (e.g., CoCastsWith or IsSpouseOf). We are left with two specific tasks: (1) extracting candidate sentences Sij, and (2) ranking Sij, where the goal is to have sentences that provide a perfect explanation of the relationship at the top position of the ranking. The next section describes our methods for both tasks. 4 Explaining Entity Relationships We follow a two-step approach for automatically explaining relationships between entity pairs. First, in Section 4.1, we extract and enrich sentences that refer to an entity pair ⟨ei,ej⟩from a corpus in order to construct a set of candidate sentences. Second, in Section 4.2, we extract a rich set of features describing the entities’ relationship r and use supervised machine learning in order to rank the sentences in Sij according to how well they describe the relationship r. 4.1 Extracting candidate sentences To create a set of candidate sentences for a given entity pair and relationship, we require a corpus of documents that is pertinent to the entities at hand. Although any kind of document collection can be used, we focus on Wikipedia in this paper, as it provides good coverage for the majority of entities in our knowledge graph. First, we extract surface forms for the given entities: the title of the entity’s Wikipedia article (e.g., “Barack Obama”), the titles of all redirect pages linking to that article (e.g., “Obama”), and all anchor text associated with hyperlinks to the article within Wikipedia (e.g., “president obama”). We then split all Wikipedia articles into sentences and consider a sentence as a candidate if (i) the sentence is part of either entities’ Wikipedia article and contains a surface form of, or a link to, the other entity; or (ii) the sentence contains surface forms of, or links to, both entities in the entity pair. Next, we apply two sentence enrichment steps for (i) making sentences self-contained and readable outside the context of the source document and (ii) linking the sentences to entities. For (i), we replace pronouns in candidate sentences with the title of the entity. 
We apply a simple heuristic for the people entities, inspired by (Wu and Weld, 2010):1 we count the frequency of the terms “he” and “she” in the article for determining the gender of the entity, and we replace the first appearance of “he” or “she” in each sentence with the entity’s title. We skip this step if any surface form of the entity occurs in the sentence. For (ii), we apply entity linking to provide links from the sentence to additional entities (Milne and Witten, 2008). This need arises from the fact that not every sentence in an article contains explicit links to the entities it mentions, as Wikipedia guidelines only allow one link to another article in the article’s text.2 The algorithm takes a sentence as input and iterates over n-grams that are not yet linked to an entity. If an n-gram matches a surface form of an entity, we establish a link between the n-gram and the entity. We restrict our search space to entities that are linked from within the source article of the sentence and from within articles to which the source article links. This way, our entity linking method achieves high precision as almost no disambiguation is necessary. As an example, consider the sentence “He gave critically acclaimed performances in the crime thriller Seven...” on the Wikipedia page for Brad Pitt. After applying our enrichment steps, we obtain “Brad Pitt gave critically acclaimed performances in the crime thriller Seven...”, making the sentence human readable and link to the entities Brad Pitt and Seven (1995 film). 4.2 Ranking sentences After extracting candidate sentences, we rank them by how well they describe the relationship of interest r between entities ei and ej. There are many signals beyond simple term statistics that can indicate relevance. Automatically constructing a ranking model using supervised machine learning techniques is therefore an obvious choice. For ranking we use learning to rank (LTR) and represent each sentence with a rich set of features. Table 1 lists the features we use. Below we provide 1We experimented with the Stanford co-reference resolution system (Lee et al., 2011) and Apache OpenNLP and found that they were not able to consistently achieve the level of effectiveness that we require. 2http://en.Wikipedia.org/wiki/ Wikipedia:Manual_of_Style/Linking 566 # Name Gloss Text features 1 Sentence length Length of s in words 2 Sum of idf Sum of IDF of terms of s in Wikipedia 3 Average idf Average IDF of terms of s in Wikipedia 4 Sentence density Lexical density of s, see Equation 1 (Lee et al., 2001) 5–8 POS fractions Fraction of verbs, nouns, adjectives, others in s (Mintz et al., 2009) Entity features 9 #entities Total number of entities in s 10 Link to ei Whether s contains a link to the entity ei 11 Link to ej Whether s contains a link to the entity ej 12 Links to ei and ej Whether s contains links to both entities ei and ej 13 Entity first Is ei or ej the first entity in the sentence? 
14 Spread of ei, ej Distance between the last match of ei and ej in s (Blanco and Zaragoza, 2010) 15–22 POS fractions left/right Fraction of verbs, nouns, adjectives, others to the left/right window of ei and ej in s (Mintz et al., 2009) 23–25 #entities left/right/between Number of entities to the left/right or between entities ei and ej in s 26 common links ei, ej Whether s contains any common link of ei and ej 27 #common links The number of common links of ei and ej in s 28 Score common links ei, ej Sum of the scores of the common links of ei and ej in s 29–30 #common links prev/next The number of common links of ei and ej in previous/next sentence of s Relationship features 31 Match terms(r)? Whether s contains any term in terms(r) 32 Match wordnet(r)? Whether s contains any phrase in wordnet(r) 33 Match word2vec(r)? Whether s contains any phrase in word2vec(r) 34–36 or’s Boolean OR of feature 31 and one or both of features 32 and 33 37–38 or(31, 32, 33) prev/next Boolean OR of features 31, 32, 33 for the previous/next sentence of s 39 Average word2vec(r) Average cosine similarity of phrases in word2vec(r) that are matched in s 40 Maximum word2vec(r) Maximum cosine similarity of phrases in word2vec(r) that are matched in s 41 Sum word2vec(r) Sum of cosine similarity of phrases in word2vec(r) that are matched in s 42 Score LC Lucene score of s with titles(ei, ej), terms(r), wordnet(r), word2vec(r) as query 43 Score R-TFISF R-TFISF score of s with queries constructed as above Source features 44 Sentence position Position of s in document from which it originates 45 From ei or ej? Does s originate from the Wikipedia article of ei or ej? 46 #(ei or ej) Number of occurrences of ei or ej in document from which s originates, inspired by document smoothing for sentence retrieval (Murdock and Croft, 2005) Table 1: Features used for sentence ranking. a brief description of the more complex ones. Text features This feature type regards the importance of the sentence s at the term level. We compute the density of s (feature 4) as: density(s) = 1 K ⋅(K + 1) n ∑ j=1 idf(tj) ⋅idf(tj+1) distance(tj,tj+1)2 , (1) where K is the number of keyword terms in s and distance(tj,tj+1) is the number of nonkeyword terms between keyword terms tj and tj+1. We treat stop words and numbers in s as nonkeywords and the remaining terms as keywords. Features 5–8 capture the distribution of part-ofspeech tags in the sentence. Entity features These features partly build on (Tsagkias et al., 2011; Meij et al., 2012) and describe the entities and are dependent on the knowledge graph. Whether ei or ej is the first appearing entity in a sentence might be an indicator of importance (feature 13). The spread of ei and ej in the sentence (feature 14) might be an indicator of their centrality in the sentence (Blanco and Zaragoza, 2010). Features 15–22 capture the distribution of part-of-speech tags in the sentence in a window of four words around ei or ej in s (Mintz et al., 2009), complemented by the number of entities between, to the left of, and to the right of the entity pair (features 23–25). We assume that two articles that have many common articles that point to them are strongly related (Witten and Milne, 2008). We hypothesize that, if a sentence contains common inlinks from ei and ej, the sentence might contain important information about their relationship. Hence, we add whether the sentence contains a common link (fea567 ture 26) and the number of common links (feature 27) as features. 
We score a common link l between ei and ej using

$\text{score}(l, e_i, e_j) = \text{sim}(l, e_i) \cdot \text{sim}(l, e_j)$,   (2)

where sim(·,·) is defined as the similarity between two Wikipedia articles, computed using a variant of Normalized Google Distance (Witten and Milne, 2008). Feature 28 then measures the sum of the scores of the common links. Previous research shows that using surrounding sentences is beneficial for sentence retrieval (Doko et al., 2013). We therefore consider the number of common links in the previous and next sentence (features 29–30).

Relationship features. Feature 31 indicates whether any of the relationship-specific terms occurs in the sentence. Only matching the terms in the relationship may have low coverage since terms such as "spouse" may have many synonyms and/or highly related terms, e.g., "husband" or "married". Therefore, we use WordNet to find synonym phrases of r (feature 32); we refer to this method as wordnet(r). Alternatively, we use word embeddings to find such similar phrases (Mikolov et al., 2013). Such embeddings take a text corpus as input and learn vector representations of words and phrases consisting of real numbers. Given the set $V_r$ consisting of the vector representations of all the relationship terms and the set V which consists of the vector representations of all the candidate phrases in the data, we calculate the distance between a candidate phrase represented by a vector $v_i \in V$ and the vectors in $V_r$ as

$\text{distance}(v_i, V) = \cos\Big(v_i, \sum_{v_j \in V_r} v_j\Big)$,   (3)

where $\sum_{v_j \in V_r} v_j$ is the element-wise sum of the vectors in $V_r$ and the distance between two vectors $v_1$ and $v_2$ is measured using cosine similarity. The candidate phrases in V are then ranked using Equation 3 and the top-m phrases are selected, resulting in features 33, 39, 40, and 41; we refer to the ranked set of phrases that are selected using this procedure as word2vec(r). In addition, we employ state-of-the-art retrieval functions and include the scores for queries that are constructed using the entities ei and ej, the relation r, wordnet(r), and word2vec(r). We use the titles of the entity articles, titles(e), to represent the entities in the query, and two ranking functions, Recursive TFISF (R-TFISF) and LC [3] (features 42–43). TFISF is a sentence retrieval model that determines the level of relevance of a sentence s given a query q as

$R(s, q) = \sum_{t \in q} \log(\mathit{tf}_{t,q} + 1) \cdot \log(\mathit{tf}_{t,s} + 1) \cdot \log\Big(\frac{n + 1}{0.5 + \mathit{sf}_t}\Big)$,   (4)

where $\mathit{tf}_{t,q}$ and $\mathit{tf}_{t,s}$ are the number of occurrences of term t in the query q and the sentence s respectively, $\mathit{sf}_t$ is the number of sentences in which t appears, and n is the number of sentences in the collection. R-TFISF is an improved extension of the TFISF method (Doko et al., 2013), which incorporates context from neighboring sentences in the ranking function:

$R_c(s, q) = (1 - \mu) R(s, q) + \mu \big[ R_c(s_{\text{prev}}(s), q) + R_c(s_{\text{next}}(s), q) \big]$,   (5)

where µ is a free parameter and $s_{\text{prev}}(s)$ and $s_{\text{next}}(s)$ indicate functions to retrieve the previous and next sentence, respectively. We use a maximum of three recursive calls.

Source features. Here, we refer to features that are dependent on the source document of the sentences. We have three such features.

5 Experimental setup

In this section we describe the dataset, manual annotations, learning to rank algorithm, and evaluation metrics that we use to answer our research questions.

5.1 Dataset

We draw entities and their relationships from a proprietary knowledge graph that is created from Wikipedia, Freebase, IMDB, and other sources, and that is used by the Yahoo web search engine.
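Before turning to the dataset details, here is a brief, self-contained sketch of the retrieval scores in Equations (4) and (5) above (our illustration, not the authors' implementation); it assumes sentences are given as token lists in document order and that the sentence-frequency statistics have been precomputed.

```python
# Sketch of TFISF (Equation 4) and Recursive TFISF (Equation 5).
# 'sentences' is a list of token lists in document order; 'sf' maps a term
# to the number of sentences containing it; 'n' is the total number of
# sentences in the collection. Parameter names are ours.

import math
from collections import Counter

def tfisf(sentence, query, sf, n):
    tf_s = Counter(sentence)
    tf_q = Counter(query)
    score = 0.0
    for t in tf_q:
        score += (math.log(tf_q[t] + 1)
                  * math.log(tf_s[t] + 1)
                  * math.log((n + 1) / (0.5 + sf.get(t, 0))))
    return score

def r_tfisf(sentences, i, query, sf, n, mu=0.1, depth=3):
    """Blend a sentence's score with its neighbours', up to 'depth' calls
    (the paper caps the recursion at three calls)."""
    if i < 0 or i >= len(sentences) or depth == 0:
        return 0.0
    base = (1 - mu) * tfisf(sentences[i], query, sf, n)
    context = mu * (r_tfisf(sentences, i - 1, query, sf, n, mu, depth - 1)
                    + r_tfisf(sentences, i + 1, query, sf, n, mu, depth - 1))
    return base + context
```

This is a straightforward reading of Equation (5); the exact way the recursion is terminated in the original system is not spelled out beyond the three-call limit.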
We focus on “people” entities and relationships between them.4 For our experiments we need to select a manageable set of entities, which we obtain as follows. We consider a year of query logs 3In preliminary experiments R-TFISF and LC were the best performing among a pool of sentence retrieval methods. 4Note that, except for the co-reference resolution step described in Section 4.1, our method does not depend on this restriction. 568 from a large commercial search engine, count the number of times a user clicks on a Wikipedia article of an entity in the results page and perform stratified sampling of entities according to this distribution. As we are bounded by limited resources for our manual assessments, we sample 1 476 entity pairs that together with nine unique relationship types form our experimental dataset. We use an English Wikipedia dump dated July 8, 2013, containing approximately 4M articles, of which 50 638 belong to “people” entities that are also in our knowledge graph. We extract sentences using the approach described in Section 4.1, resulting in 36 823 candidate sentences for our entities. On average we have 24.94 sentences per entity pair (maximum 423 and minimum 0). Because of the large variance, it is not feasible to obtain exhaustive annotations for all sentences. We rank the sentences using R-TFISF and keep the top-10 sentences per entity pair for annotation. This results in a total of 5 689 sentences. Five human annotators provided relevance judgments, manually judging sentences based on how well they describe the relationship for an entity pair, for which we use a five-level graded relevance scale (perfect, excellent, good, fair, bad).5 Of all relevance grades 8.1% is perfect, 15.69% excellent, 19.98% good, 8.05% fair, and 48.15% bad. Out of 1 476 entity pairs, 1 093 have at least one sentence annotated as fair. As is common in information retrieval evaluation, we discard entity pairs that have only “bad” sentences. We examine the difficulty of the task for human annotators by measuring inter-annotator agreement on a subset of 105 sentences that are judged by 3 annotators. Fleiss’ kappa is k = 0.449, which is considered to be moderate agreement. 5.2 Machine learning For ranking sentences we use a Random Forest (RF) classifier (Breiman, 2001).6 We set the number of iterations to 300 and the sampling rate to 0.3. Experiments with varying these two parameters did not show any significant differences. We also tried several feature normalization methods, none of them being able to significantly outper5https://github.com/nickvosk/acl2015dataset-learning-to-explain-entityrelationships 6In preliminary experiments, we contrasted RF with gradient boosted regression trees and LambdaMART and found that RF consistently outperformed other methods. Baseline NDCG@1 NDCG@10 ERR@1 ERR@10 B1 0.7508 0.8961 0.3577 0.4531 B2 0.7511 0.8958 0.3584 0.4530 B3 0.7595 0.8997 0.3696 0.4600 B4 0.7767 0.9070 0.3774 0.4672 B5 0.7801 0.9093 0.3787 0.4682 Table 2: Results for five baseline variants. See text for their description and significant differences. form the runs without feature normalization. We obtain POS tags using the Stanford part-ofspeech tagger and filter out a standard list of 33 English stopwords. For the word embeddings we use word2vec and train our model on all text in Wikipedia using negative sampling and the continuous bag of words architecture. We set the size of the phrase vectors to 500 and m = 30. 
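As a companion to the word2vec settings just described, the following sketch (ours) shows the word2vec(r) expansion of Equation (3) with the m = 30 cutoff: candidate phrases are ranked by cosine similarity to the element-wise sum of the relationship-term vectors. Embeddings are plain lists of floats here, whereas the paper obtains them from a word2vec model trained on Wikipedia.

```python
# Sketch of the word2vec(r) expansion (Equation 3): rank candidate phrases
# by cosine similarity to the summed relationship-term vectors and keep
# the top-m. 'embeddings' maps a word or phrase to its vector.

import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def expand_relationship(relation_terms, embeddings, m=30):
    """Return the top-m candidate phrases closest to the relation vectors."""
    rel_vecs = [embeddings[t] for t in relation_terms if t in embeddings]
    if not rel_vecs:
        return []
    dim = len(rel_vecs[0])
    summed = [sum(v[k] for v in rel_vecs) for k in range(dim)]
    candidates = [p for p in embeddings if p not in relation_terms]
    ranked = sorted(candidates,
                    key=lambda p: cosine(embeddings[p], summed),
                    reverse=True)
    return ranked[:m]
```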
5.3 Evaluation metrics We employ two main evaluation metrics in our experiments, NDCG (J¨arvelin and Kek¨al¨ainen, 2002) and ERR (Chapelle et al., 2009). The former measures the total accumulated gain from the top of the ranking that is discounted at lower ranks and is normalized by the ideal cumulative gain. The latter models user behavior and measures the expected reciprocal rank at which a user will stop her search. We consider these rankingbased graded evaluation metrics at two cut-off points: position 1, corresponding to showing a single sentence to a user, and 10, which accounts for users who might look at more results. We report on NDCG@1, NDCG@10, ERR@1, ERR@10, and Exc@1, which indicates whether we have an “excellent” or “perfect” sentence at the top of the ranking. Likewise, Per@1 indicates whether we have a “perfect” sentence at the top of the ranking (not all entity pairs have an excellent or a perfect sentence). We perform 5-fold cross validation and test for statistical significance using a paired two-tailed ttest. We depict a significant difference in performance for p < 0.01 with ▲(gain) and ▼(loss) and for p < 0.05 with △(gain) and ▽(loss). Boldface indicates the best score for a metric. 6 Results and Analysis We compare the performance of typical document retrieval models and state-of-the-art sentence retrieval models in order to answer RQ1. We consider five sentence retrieval models: Lucene ranking (LC), language modeling with Dirichlet 569 Has one # pairs # sentences Method NDCG@1 NDCG@10 ERR@1 ERR@10 Exc@1 Per@1 fair 1 093 4 435 B5 0.7801 0.9093 0.3787 0.4682 – – LTR 0.8489▲ 0.9375▲ 0.4242▲ 0.4980▲ – – good 1 038 4 285 B5 0.7742 0.9078 0.3958 0.4894 – – LTR 0.8486▲ 0.9374▲ 0.4438▲ 0.5208▲ – – excellent 752 3 387 B5 0.7455 0.8999 0.4858 0.5981 0.7314 – LTR 0.8372▲ 0.9340▲ 0.5500▲ 0.6391▲ 0.8298▲ – perfect 339 1 687 B5 0.7082 0.8805 0.6639 0.7878 0.7729 0.6136 LTR 0.8150▲ 0.9245▲ 0.7640▲ 0.8518▲ 0.8909▲ 0.7227▲ Table 3: Results for the best baseline (B5) and the learning to rank method (LTR). smoothing (LM), BM25, TFISF, and Recursive TF-ISF (R-TFISF). We follow related work and set µ = 0.1 for R-TFISF, k = 1 and b = 0 for BM25 and µ = 250 for LM (Fern´andez et al., 2011). In our experiments, a query q is constructed using various combinations of surface forms of the two entities ei and ej and the relationship r. Each entity in the entity pair can be represented by its title, the titles of any redirect pages pointing to the entity’s article, the n-grams used as anchors in Wikipedia to link to the article of the entity, or the union of them all. The relationship r can be represented by the terms in the relationship, synonyms in wordnet(r), or by phrases in word2vec(r). First, we fix the way we represent r. Baseline B1 does not include any representation of r in the query. B2 includes the relationship terms of r, while B3 includes the relationship terms of r and the synonyms in wordnet(r). B4 includes the terms of r and the phrases in word2vec(r), and B5 includes the relationship terms of r, the synonyms in wordnet(r) and the phrases in word2vec(r). Combining these variations with the entity representations, we find that all combinations that use the titles as representation and R-TFISF as the retrieval function outperform all other combinations.7 This can be explained by the fact that titles are least ambiguous, thus reducing the possibility of accidentally referring to other entities. 
BM25 and LC perform almost as well as R-TFISF, with only insignificant differences in performance. Table 2 shows the best performing combination of each baseline, i.e., varying the representation of r and using titles and R-TFISF. B4 and B5 are the best performing baselines, suggesting that word2vec(r) and wordnet(r) are beneficial. B5 significantly outperforms all baselines except B4. We also experiment with a supervised combina7We omit a full table of results due to space constraints. tion of the baseline rankers using LTR. Here, we consider each baseline ranker as a separate feature and train a ranking model. The trained model is not able to outperform the best individual baseline, however. 6.1 Learning to rank sentences Next, we provide the results of our method using the features described in Section 4.2, exploring whether our learning to rank (LTR) approach improves over sentence retrieval models (RQ2). We compare an LTR model using Table 1’s features against the best baseline (B5). Table 3 shows the results. Each group in the table contains the results for the entity pairs that have at least one candidate sentence of that relevance grade for B5 and LTR. We find that LTR significantly outperforms B5 by a large margin. The absolute performance difference between LTR and B5 becomes larger for all metrics as we move from “fair” to “perfect,” which shows that LTR is more robust than the baseline for entity pairs that have at least one high quality candidate sentence. LTR ranks the best possible sentence at the top of the ranking for ∼83% of the cases for entity pairs that contain an “excellent” sentence and for ∼72% of the cases for entity pairs that contain a “perfect” sentence. Note that, as indicated in Section 5.1, we discard entity pairs that have only “bad” sentences in our experiments. For the sake of completeness, we report on the results for all entity pairs in our dataset—including those without any relevant sentences—in Table 4. 6.2 Relationship-dependent models Relevant sentences may have different properties for different relationship types. For example, a sentence describing two entities being partners would have a different form than one describing that two entities costar in a movie. A similar 570 Has one # pairs # sentences Method NDCG@1 NDCG@10 ERR@1 ERR@10 Exc@1 Per@1 1 476 5 689 B5 0.5776 0.6733 0.2804 0.3467 – – LTR 0.6285▲ 0.6940▲ 0.3155▲ 0.3694▲ – – Table 4: Results for the best baseline (B5) and the learning to rank method (LTR), using all entity pairs in the dataset, including those without any relevant sentences. Relationship # pairs # sentences NDCG@1 NDCG@10 ERR@1 ERR@10 ⟨MovieActor, CoCastsWith, MovieActor⟩ 410 1 403 0.8604 0.9436 0.3809 0.4546 ⟨TvActor, CoCastsWith, TvActor⟩ 210 626 0.8729 0.9482 0.3271 0.3845 ⟨MovieActor, IsDirectedBy, MovieDirector⟩ ⟨MovieDirector, Directs, MovieActor⟩ 112 492 0.8795 0.9396 0.4709 0.5261 ⟨Person, isChildOf , Person⟩ ⟨Person, isParentOf , Person⟩ 108 716 0.8428 0.9081 0.6395 0.7136 ⟨Person, isPartnerOf , Person⟩ ⟨Person, isSpouseOf , Person⟩ 155 877 0.8623 0.9441 0.6153 0.6939 ⟨Athlete, PlaysSameSportTeamAs, Athlete⟩ 98 321 0.8787 0.9535 0.3350 0.3996 Average results over all data 1 093 4 435 0.8661 0.9395 0.4615 0.5287 LTR (Table 3; fair) 0.8489 0.9375 0.4242 0.4980 Table 5: Results for relationship-dependent models. Similar relationships are grouped together. idea was investigated in the context of QA for associating question and answer types (Yao et al., 2013). 
To answer (RQ3) we examine whether learning a relationship-dependent model improves over learning a single model for all types. We split our dataset per relationship type and train a model per type using 5-fold cross-validation within each. Table 5 shows the results.8 Our method is robust across different relationships in terms of NDCG. However, we observe some variation in ERR as this metric is more sensitive to the distribution of relevant items than NDCG—the distribution over relevance grades varies per relationship type. For example, it is much more likely to find candidate sentences that have a high relevance grade for ⟨Person, isSpouseOf , Person⟩than for ⟨Athlete, PlaysSameSportTeamAs, Athlete⟩in our dataset. We plan to address this issue by exploring other corpora in the future. The second-to-last row in Table 5 shows the averaged results over the different relationship types, which is a significant improvement over LTR at p < 0.01 for all metrics. This method ranks the best possible sentence at the top of the ranking for ∼85% of the cases for entity pairs that contain an “excellent” sentence (∼2% absolute improvement over LTR) and for ∼75% of the cases for entity pairs that contain a “perfect” sentence (∼3% absolute improvement over LTR). 8We omit Exc@1 and Per@1 due to space constraints. 6.3 Feature type analysis Next, we analyze the impact of the feature types. Table 6 shows how performance varies when removing one feature type at a time from the full feature set. Relationship type features are the most important, although entity type features are important as well. This indicates that introducing features based on entities identified in the sentences and the relationship is beneficial for this task. Furthermore, the limited dependency on the source feature type indicates that our method might be able to generalize in other domains. Finally, text type features do contribute to retrieval effectiveness, although not significantly. Note that calculating the sentence features is straightforward, as none of our features requires heavy linguistic analysis. Features NDCG@1 NDCG@10 ERR@1 ERR@10 All 0.8661 0.9395 0.4615 0.5287 All∖text 0.8620 0.9372 0.4606 0.5274 All∖source 0.8598 0.9372 0.4582 0.5261 All∖entity 0.8421▽ 0.9282▼ 0.4497 0.5202▽ All∖relation 0.8183▼ 0.9201▼ 0.4352▼ 0.5112▼ Table 6: Results using relationship-dependent models, removing individual feature types. 6.4 Error analysis When looking at errors made by the system, we find that some are due to the fact that entity pairs might have more than one relationship (e.g., ac571 tors that costar in movies also being partners) but the selected sentence covers only one of the relationships.9 For example, Liza Minnelli is the daughter of Judy Garland, but they have also costarred in a movie, which is the relationship of interest. The model ranks the sentence “Liza Minnelli is the daughter of singer and actress Judy Garland. . . ” at the top, while the most relevant sentence is: “Judy Garland performed at the London Palladium with her then 18-year-old daughter Liza Minnelli in November 1964.” Sentences that contain the relationship in which we are interested, but for which this cannot be directly inferred, are another source of error. Consider, for example, the following sentence, which explains director Christopher Nolan directed actor Christian Bale: “Jackman starred in the 2006 film The Prestige, directed by Christopher Nolan and costarring Christian Bale, Michael Caine, and Scarlett Johansson”. 
Even though the sentence contains the relationship of interest, it focuses on actor Hugh Jackman. The sentence “In 2004, after completing filming for The Machinist, Bale won the coveted role of Batman and his alter ego Bruce Wayne in Christopher Nolan’s Batman Begins. . . ”, in contrast, refers to the two entities and the relationship of interest directly, resulting in a higher relevance grade. Our method, however, ranks the first sentence on top, as it contains more phrases that refer to the relationship. 7 Conclusions and Future Work We have presented a method for explaining relationships between knowledge graph entities with human-readable descriptions. We first extract and enrich sentences that refer to an entity pair and then rank the sentences according to how well they describe the relationship. For ranking, we use learning to rank with a diverse set of features. Evaluation on a manually annotated dataset of “people” entities shows that our method significantly outperforms state-of-the-art sentence retrieval models for this task. Experimental results also show that using relationship-dependent models is beneficial. In future work we aim to evaluate how our method performs on entities and relationships of 9The annotators marked sentences that do not refer to the relationship of interest as “bad” but indicated whether they describe another relationship or not. We plan to account for such cases in future work. any type and popularity, including tail entities and miscellaneous relationships. We also want to investigate moving beyond Wikipedia and extract candidate sentences from documents that are not related to the knowledge graph, such as web pages or news articles. Employing such documents also implies an investigation into more advanced coreference resolution methods. Our analysis showed that sentences may cover different relationships between entities or different aspects of a single relationship—we aim to account for such cases in follow-up work. Furthermore, sentences may contain unnecessary information for explaining the relation of interest between two entities. Especially when we want to show the obtained results to end users, we may need to apply further processing of the sentences to improve their quality and readability. We would like to explore sentence compression techniques to address this. Finally, relationships between entities have an inherit temporal nature and we aim to explore ways to explain entity relationships and their changes over time. Acknowledgments This research was partially supported by the European Community’s Seventh Framework Programme (FP7/2007-2013) under grant agreement nr 312827 (VOX-Pol), the Netherlands Organisation for Scientific Research (NWO) under project nrs 727.011.005, 612.001.116, HOR-11-10, 640.006.013, 612.066.930, CI-14-25, SH-322-15, Amsterdam Data Science, the Dutch national program COMMIT, the ESF Research Network Program ELIAS, the Elite Network Shifts project funded by the Royal Dutch Academy of Sciences (KNAW), the Netherlands eScience Center under project nr 027.012.105, the Yahoo! Faculty Research and Engagement Program, the Microsoft Research PhD program, and the HPC Fund. References Arvind Agarwal, Hema Raghavan, Karthik Subbian, Prem Melville, Richard D. Lawrence, David C. Gondek, and James Fan. 2012. Learning to rank for robust question answering. In Proceedings of the 21st ACM international conference on Information and knowledge management, pages 833–842. ACM. James Allan, Courtney Wade, and Alvaro Bolivar. 2003. 
Retrieval and novelty detection at the sentence level. In Proceedings of the 26th annual international ACM SIGIR conference on Research and 572 development in informaion retrieval, pages 314– 321. ACM. Ricardo Baeza-Yates, Berthier Ribeiro-Neto, et al. 1999. Modern information retrieval, volume 463. ACM press New York. Roi Blanco and Hugo Zaragoza. 2010. Finding support sentences for entities. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, pages 339–346. ACM. Roi Blanco, Berkant Barla Cambazoglu, Peter Mika, and Nicolas Torzec. 2013. Entity recommendations in web search. In The Semantic Web–ISWC 2013, pages 33–48. Springer. Leo Breiman. 2001. Random forests. Mach. Learn., 45(1):5–32. Christopher J.C. Burges, Krysta Marie Svore, Paul N. Bennett, Andrzej Pastusiak, and Qiang Wu. 2011. Learning to rank using an ensemble of lambdagradient models. In Yahoo! Learning to Rank Challenge, pages 25–35. Olivier Chapelle, Donald Metzler, Ya Zhang, and Pierre Grinspan. 2009. Expected reciprocal rank for graded relevance. In Proceedings of the 18th ACM conference on Information and knowledge management, pages 621–630. ACM. Mark Craven, Dan DiPasquo, Dayne Freitag, Andrew McCallum, Tom Mitchell, Kamal Nigam, and Se´an Slattery. 2000. Learning to construct knowledge bases from the world wide web. Artificial Intelligence, 118(1–2):69–113. Alen Doko, Maja ˇStula, and Darko Stipaniˇcev. 2013. A recursive TF-ISF based sentence retrieval method with local context. International Journal of Machine Learning and Computing, 3(2):195–200. Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: A web-scale approach to probabilistic knowledge fusion. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, pages 601– 610. ACM. James Fan, Raphael Hoffman, Aditya Kalyanpur, Sebastian Riedel, Fabian Suchanek, and Pratim Partha Talukdar, 2012. Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX), chapter Proceedings of the Joint Workshop on Automatic Knowledge Base Construction and Web-scale Knowledge Extraction (AKBC-WEKEX). Association for Computational Linguistics. Lujun Fang, Anish Das Sarma, Cong Yu, and Philip Bohannon. 2011. Rex: explaining relationships between entity pairs. Proceedings of the VLDB Endowment, 5(3):241–252. Ronald T Fern´andez, David E. Losada, and Leif Azzopardi. 2011. Extending the language modeling framework for sentence retrieval to include local context. Information Retrieval, 14(4):355–389. Lynette Hirschman and Robert Gaizauskas. 2001. Natural language question answering: the view from here. Natural Language Engineering, 7(04):275– 300. Kalervo J¨arvelin and Jaana Kek¨al¨ainen. 2002. Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems (TOIS), 20(4):422–446. Gary Geunbae Lee, Jungyun Seo, Seungwoo Lee, Hanmin Jung, Bong-Hyun Cho, Changki Lee, ByungKwan Kwak, Jeongwon Cha, Dongseok Kim, JooHui An, et al. 2001. SiteQ: Engineering high performance QA system using lexico-semantic pattern matching and shallow NLP. In TREC. Heeyoung Lee, Yves Peirsman, Angel Chang, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2011. Stanford’s multi-pass sieve coreference resolution system at the CoNLL-2011 shared task. 
In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task, pages 28–34. Association for Computational Linguistics. David E. Losada. 2008. A study of statistical query expansion strategies for sentence retrieval. In Proceedings of the SIGIR 2008 Workshop on Focused Retrieval, pages 37–44. Edgar Meij, Wouter Weerkamp, and Maarten de Rijke. 2012. Adding semantics to microblog posts. In Proceedings of the fifth ACM international conference on Web search and data mining, pages 563– 572. ACM. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. David Milne and Ian H. Witten. 2008. Learning to link with Wikipedia. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 509–518. ACM. Mike Mintz, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1003–1011. Association for Computational Linguistics. Vanessa Murdock and W. Bruce Croft. 2005. A translation model for sentence retrieval. In Proceedings of the conference on Human Language Technology 573 and Empirical Methods in Natural Language Processing, pages 684–691. Association for Computational Linguistics. Vanessa Graham Murdock. 2006. Aspects of Sentence Retrieval. Ph.D. thesis, University of Massachusetts Amherst. Mihai Surdeanu, Massimiliano Ciaramita, and Hugo Zaragoza. 2011. Learning to rank answers to nonfactoid questions from web collections. Computational Linguistics, 37(2):351–383. Manos Tsagkias, Maarten de Rijke, and Wouter Weerkamp. 2011. Linking online news and social media. In WSDM 2011: Fourth ACM International Conference on Web Search and Data Mining. ACM, February. Jason Weston, Antoine Bordes, Oksana Yakhnenko, and Nicolas Usunier. 2013. Connecting language and knowledge bases with embedding models for relation extraction. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013. Ian Witten and David Milne. 2008. An effective, lowcost measure of semantic relatedness obtained from wikipedia links. In Proceeding of AAAI Workshop on Wikipedia and Artificial Intelligence: an Evolving Synergy, AAAI Press, Chicago, USA, pages 25– 30. Fei Wu and Daniel S Weld. 2010. Open information extraction using wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 118–127. Association for Computational Linguistics. Xuchen Yao, Benjamin Van Durme, and Peter Clark. 2013. Automatic coupling of answer extraction and information retrieval. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 159– 165. Association for Computational Linguistics. 574
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 575–585, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Bring you to the past: Automatic Generation of Topically Relevant Event Chronicles Tao Ge1,2, Wenzhe Pei1, Heng Ji3, Sujian Li1,2, Baobao Chang1,2, Zhifang Sui1,2 1Key Laboratory of Computational Linguistics, Ministry of Education, School of EECS, Peking University, Beijing, 100871, China 2Collaborative Innovation Center for Language Ability, Xuzhou, Jiangsu, 221009, China 3Computer Science Department, Rensselaer Polytechnic Institute, Troy, NY 12180, USA [email protected], [email protected], [email protected], [email protected], [email protected], [email protected] Abstract An event chronicle provides people with an easy and fast access to learn the past. In this paper, we propose the first novel approach to automatically generate a topically relevant event chronicle during a certain period given a reference chronicle during another period. Our approach consists of two core components – a timeaware hierarchical Bayesian model for event detection, and a learning-to-rank model to select the salient events to construct the final chronicle. Experimental results demonstrate our approach is promising to tackle this new problem. 1 Introduction Human civilization has developed for thousands of years. During the long period, history witnessed the changes of societies and dynasties, the revolution of science and technology, as well as the emergency of celebrities, which are great wealth for later generations. Even nowadays, people usually look back through history either for their work or interests. Among various ways to learn history, many people prefer reading an event chronicle summarizing important events in the past, which saves much time and efforts. The left part of Figure 1 shows a disaster event chronicle from Infoplease1, by which people can easily learn important disaster events in 2009. Unfortunately, almost all the available event chronicles are created and edited manually, which requires editors to learn everything that happened in the past. Even if an editor tries her best to generate an event chronicle, she still cannot guarantee that all the important events are included. Moreover, when new events happen in the future, she 1http://www.infoplease.com/world/disasters/2009.html needs to update the chronicle in time, which is laborious. For example, the event chronicle of 2010 in Wikipedia2 has been edited 8,488 times by 3,211 distinct editors since this page was created. In addition, event chronicles can vary according to topic preferences. Some event chronicles are mainly about disasters while others may focus more on sports. For people interested in sports, the event chronicle in Figure 1 is undesirable. Due to the diversity of event chronicles, it is common that an event chronicle regarding a specific topic for some certain period is unavailable. If editing an event chronicle can be done by computers, people can have an overview of any period according to their interests and do not have to wait for human editing, which will largely speed up knowledge acquisition and popularization. Based on this motivation, we propose a new task of automatic event chronicle generation, whose goal is to generate a topically relevant event chronicle for some period based on a reference chronicle of another period. 
For example, if an disaster event chronicle during 2009 is available, we can use it to generate a disaster chronicle during 2010 from a news collection, as shown in Figure 1. To achieve this goal, we need to know what events happened during the target period, whether these events are topically relevant to the chronicle, and whether they are important enough to be included, since an event chronicle has only a limited number of entries. To tackle these challenges, we propose an approach consisting of two core components – an event detection component based on a novel time-aware hierarchical Bayesian model and a learning-to-rank component to select the salient events to construct the final chronicle. Our event detection model can not only learn topic preferences of the reference chronicle and measure topical relevance of an event to the chronicle 2http://en.wikipedia.org/wiki/2010 575 Figure 1: Example for automatic generation of a topically relevant event chronicle. but also can effectively distinguish similar events by taking into account time information and eventspecific details. Experimental results show our approach significantly outperforms baseline methods and that is promising to tackle this new problem. The major novel contributions of this paper are: • We propose a new task automatic generation of a topically relevant event chronicle, which is meaningful and has never been studied to the best of our knowledge. • We design a general approach to tackle this new problem, which is languageindependent, domain-independent and scalable to any arbitrary topics. • We design a novel event detection model. It outperforms the state-of-the-art event detection model for generating topically relevant event chronicles. 2 Terminology and Task Overview Figure 2: An example of relevance-topic-event hierarchical structure for a disaster event chronicle. As shown in Figure 1, an event (entry) in an event chronicle corresponds to a specific occurrence in the real world, whose granularity depends on the chronicle. For a sports chronicle, an event entry may be a match in 2010 World Cup, while for a comprehensive chronicle, the World Cup is regarded as one event. In general, an event can be represented by a cluster of documents related to it. The topic of an event can be considered as the event class. For example, we can call the topic of MH17 crash as air crash (fine-grained) or disaster (coarse-grained). The relation between topic and event is shown through the example in Figure 2. An event chronicle is a set of important events occurring in the past. Event chronicles vary according to topic preferences. For the disaster chronicle shown in Figure 1, earthquakes and air crashes are relevant topics while election is not. Hence, we can use a hierarchical structure to organize documents in a corpus, as Figure 2 shows. Formally, we define an event chronicle E = {e1, e2, ..., en} where ei is an event entry in E and it can be represented by a tuple ei = ⟨Dei, tei, zei⟩. Dei denotes the set of documents about ei, tei is ei’s time and zei is ei’s topic. Specially, we use Λ to denote the time period (interval) covered by E, and θ to denote the topic distribution of E, which reflects E’s topic preferences. As shown in Figure 1, the goal of our task is to generate an (target) event chronicle ET during ΛT based on a reference chronicle ER during ΛR. The topic distributions of ET and ER (i.e., θT and θR) should be consistent. 
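To make these definitions concrete, a chronicle and its entries can be represented directly as simple data structures; the sketch below mirrors the tuple e_i = ⟨D_{e_i}, t_{e_i}, z_{e_i}⟩ and the period Λ. It is purely illustrative (the document ids and the topic index are made up), not part of the approach itself.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Tuple

@dataclass
class EventEntry:
    documents: List[str]        # D_e: ids of the documents describing the event
    time: date                  # t_e: the event's time
    topic: int                  # z_e: index of the event's topic

@dataclass
class EventChronicle:
    period: Tuple[date, date]                      # Lambda: time span covered
    entries: List[EventEntry] = field(default_factory=list)

# e.g., a reference-chronicle entry for the January 2010 Haiti earthquake:
haiti = EventEntry(documents=["APW_0421", "XIN_0098"],
                   time=date(2010, 1, 12), topic=3)
```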
3 Event Detection 3.1 Challenges of Event Detection Figure 3: Documents that are lexically similar but refer to different events. The underlined words are event-specific words. 576 For our task, the first step is to detect topically relevant events from a corpus. A good event detection model should be able to (1) measure the topical relevance of a detected event to the reference chronicle. (2) consider document time information. (3) look into a document’s event-specific details. The first requirement is to identify topically relevant events since we want to generate a topically relevant chronicle. The second and third requirements are for effectively distinguishing events, especially similar events like the example in Figure 3. To distinguish the similar events, we must consider document time information (for distinguishing events in d1 and d2) and look into the document’s event-specific details (the underlined words in Figure 3) (for distinguishing events in d1 and d3). 3.2 TaHBM: A Time-aware Hierarchical Bayesian Model To tackle all the above challenges mentioned in Section 3.1, which cannot be tackled by conventional detection methods (e.g., agglomerative clustering), we propose a Time-aware Hierarchical Bayesian Model (TaHBM) for detecting events. Model Overview Figure 4: The plate diagram of TaHBM. The shaded nodes are observable nodes. The plate diagram and generative story of TaHBM are depicted in Figure 4 and Figure 5 respectively. For a corpus with M documents, TaHBM assumes each document has three labels – s, z, and e. s is a binary variable indicating a document’s topical relevance to the reference event chronicle, whose distribution is a Bernoulli distribution πs drawn from a Beta distribution with Draw πs ∼Beta(γs) For each s ∈{0, 1}: draw θ(s) ∼Dir(α) For each z = 1, 2, 3, ..., K: draw φ(z) ∼ Dir(ε), ψ(z) z ∼Dir(βz) For each e = 1, 2, 3, ..., E: draw ψ(e) e ∼ Dir(βe) For each document m = 1, 2, 3, ..., M: Draw s ∼Bernoulli(πs) Draw z ∼Multi(θ(s)) Draw e ∼Multi(φ(z)) Draw t′ ∼Gaussian(µe, σe), t ←⌊t′⌋ Draw πx ∼Beta(γx) For each word w in document m: Draw x ∼Bernoulli(πx) If x = 0: draw w ∼ψ(z) z Else: draw w ∼ψ(e) e Figure 5: The generative story of TaHBM symmetric hyperparameter γs. s=1 indicates the document is topically relevant to the chronicle while s=0 means not. z is a document’s topic label drawn from a K-dimensional multinomial distribution θ, and e is a document’s event label drawn from an E-dimensional multinomial distribution φ. θ and φ are drawn from Dirichlet distributions with symmetric hyperparameter α and ε respectively. For an event e′, it can be represented by a set of documents whose event label is e′. In TaHBM, the relations among s, z and e are similar to the hierarchical structure in Figure 2. Based on the dependencies among s, z and e, we can compute the topical relevance of an event to the reference chronicle by Eq (1) where P(e|z), P(e), P(s) and P(z|s) can be estimated using Bayesian inference (some details of estimation of P(s) and P(s|z) will be discussed in Section 3.3) and thus we solve the first challenge in Section 3.1 (i.e., topical relevance measure problem). P(s|e) = P(s) × P(z|s) × P(e|z) P(e) (1) Now, we introduce how to tackle the second challenge – how to take into account a document’s time information for distinguishing events. In TaHBM, we introduce t, document timestamps. We assume t = ⌊t′⌋where t′ is drawn from a Gaussian distribution with mean µ and variance σ2. 
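To see how s, z, e, t, and the word-level switch x interact, the toy simulation below samples one synthetic document from the generative story in Figure 5. All parameters are assumed to be given (in the model they are drawn from the Beta/Dirichlet priors), and the floor on t follows the definition t = ⌊t′⌋; this is an illustration of the model, not part of the inference code.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_document(pi_s, theta, phi, mu, sigma, psi_z, psi_e, pi_x, n_words):
    """Sample one synthetic document from the TaHBM generative story (Figure 5)."""
    s = rng.binomial(1, pi_s)                        # relevant to the chronicle?
    z = rng.choice(len(theta[s]), p=theta[s])        # topic, conditioned on s
    e = rng.choice(len(phi[z]), p=phi[z])            # event, conditioned on topic
    t = int(np.floor(rng.normal(mu[e], sigma[e])))   # timestamp from the event's Gaussian
    words = []
    for _ in range(n_words):
        x = rng.binomial(1, pi_x)                    # 1: event word, 0: topic word
        dist = psi_e[e] if x == 1 else psi_z[z]
        words.append(int(rng.choice(len(dist), p=dist)))
    return s, z, e, t, words
```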
Each event e corresponds to a specific Gaussian distribution which serves as a temporal con577 straint for e. A Gaussian distribution has only one peak around where the probability is concentrated. Its value trends to zero if a point lies far away from the mean. For this reason, a Gaussian distribution is suitable to describe an event’s temporal distribution whose probability usually concentrates around the event’s burst time and it will be close to zero if time lies far from the burst time. Figure 6 shows the temporal distribution of the July 2009 Urumqi riots3. The probability of this event concentrates around the 7th day. If we use a Gaussian distribution (the dashed curve in Figure 6) to constrain this event’s time scope, the documents whose timestamps are beyond this scope are unlikely to be grouped into this event’s cluster. Now that the problems of topical relevance measure and temporal constraints have been solved, we discuss how to identify event-specific details of a document for distinguishing events. By analyzing the documents shown in Figure 3, we find that general words (e.g., earthquake, kill, injury, devastate) indicate the document’s topic while words about event-specific details (e.g., Napa, California, 3.4-magnitude) are helpful to determine what events the document talks about. Assuming a person is asked to analyze what event a document discusses, it would be a natural way to first determine topic of the document based its general words, and then determine what event it talks about given its topic, timestamp and eventspecific details, which is exactly the way our TaHBM works. For simplicity, we call the general words as topic words and call the words describing eventspecific information as event words. Inspired by the idea of Chemudugunta et al. (2007), given the different roles these two kinds of words play, we assume words in a document are generated by two distributions: topic words are generated by a topic word distribution ψz while event words are generated by an event word distribution ψe. ψz and ψe are |V |-dimensional multinomial distributions drawn from Dirichlet distributions with symmetric hyperparameter βz and βe respectively, where |V | denotes the size of vocabulary V . A binary indicator x, which is generated by a Bernoulli distribution πx drawn from a Beta distribution with symmetric hyperparameter γx, determines whether a word is generated by ψz or ψe. Specifically, if x = 0, a word is drawn from ψz; otherwise the 3http://en.wikipedia.org/wiki/July 2009 Urumqi riots Figure 6: The temporal distribution of documents about the Urumqi riots, which can be described by a Gaussian distribution (the dashed curve). The horizontal axis is time (day) and the vertical axis is the number of documents about this event. word is drawn from ψe. Since ψz is shared by all events of one topic, it can be seen as a background word distribution which captures general aspects. In contrast, ψe tends to describe the event-specific aspects. In this way, we can model a document’s general and specific aspects and use the information to better distinguish similar events4. Model Inference Like most Bayesian models, we use collapsed Gibbs sampling for model inference in TaHBM. 
For a document m, we present the conditional probability of its latent variables s, z and x for sampling: P(sm|⃗s¬m, ⃗z, γs, α) = cs + γs P s(cs + γs) × cs,zm + α P z(cs,z + α) (2) P(zm|⃗z¬m,⃗e,⃗s, ⃗wm, ⃗xm, α, ε, βz) = csm,z + α P z(csm,z + α) × cz,em + ε P e(cz,e + ε) × Nm Y n=1 (cz,wm,n + Pn−1 i=1 1(wm,i = wm,n) + βz P w∈V (cz,w + βz) + n −1 )(1−xm,n) (3) P(xm,n|⃗wm, ⃗x¬m,n, zm, em, γx) = cm,x + γx Nm + 2γx × ( czm,wm,n + βz P w∈V (czm,w + βz))(1−x) × ( cem,wm,n + βe P w∈V (cem,w + βe))x (4) where V denotes the vocabulary, wm,n is the nth word in a document m, cs is the count of documents with topic relevance label s, cs,z is the count 4TaHBM is language-independent, which can identify event words without name tagging. But if name tagging results are available, we can also exploit them (e.g., we can fix x of a named entity specific to an event to 1 during inference.). 578 of documents with topic relevance label s and topic label z, cz,w is the count of word w whose document’s topic label is z, cm,x is the count of words with binary indicator label x in m and 1(·) is an indicator function. Specially, for variable e which is dependent on the Gaussian distribution, its conditional probability for sampling is computed as Eq (5): P(em|⃗e¬m, ⃗z, ⃗wm, ⃗xm, tm, ε, βe, µe, σe) = czm,e + ε P e(czm,e + ε) × Z tm+1 tm pG(tm; µe, σ′ e) × Nm Y n=1 (ce,wm,n + Pn−1 i=1 1(wm,i = wm,n) + βe P w∈V (ce,w + βe) + n −1 )xm,n (5) where pG(x; µ, σ) is a Gaussian probability mass function with parameter µ and σ. The function pG(·) can be seen as the temporal distribution of an event, as discussed before. In this sense, the temporal distribution of the whole corpus can be considered as a mixture of Gaussian distributions of events. As a natural way to estimate parameters of mixture of Gaussians, we use EM algorithm (Bilmes, 1998). In fact, Eq (5) can be seen as the E-step. The M-step of EM updates µ and σ as follows: µe = P d∈De td |De| , σe = sP d∈De(td −µe)2 |De| (6) where td is document d’s timestamp and De is the set of documents with event label e. Specially, for sampling e we use σ′ e defined as σ′ e = σe + τ (τ is a small number for smoothing5) because when σ is very small (e.g., σ = 0), an event’s temporal scope will be strictly constrained. Using σ′ e can help the model overcome this “trap” for better parameter estimation. Above all, the model inference and parameter estimation procedure can be summarized by algorithm 1. 3.3 Learn Topic Preferences of the Event Chronicle A prerequisite to use Eq (1) to compute an event’s topical relevance to an event chronicle is that we know P(s) and P(z|s) which reflects topic preferences of the event chronicle. Nonetheless, P(s) and P(z|s) vary according to different event chronicles. Hence, we cannot directly estimate 5τ is set to 0.5 in our experiments. Algorithm 1 Model inference for TaHBM 1: Initialize parameters in TaHBM; 2: for each iteration do 3: for each document d in the corpus do 4: sample s according to Eq (2) 5: sample z according to Eq (3) 6: sample e according to Eq (5) 7: for each word w in d do 8: sample x according to Eq (4) 9: end for 10: end for 11: for each event e do 12: update µe, σe according to Eq (6) 13: end for 14: end for them in an unsupervised manner; instead, we provide TaHBM some “supervision”. As we mentioned in section 3.2, the variable s indicates a document’s topical relevance to the event chronicle. 
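Before turning to how labels for s are obtained, note that the M-step of Equation 6 above is just the empirical mean and standard deviation of the timestamps currently assigned to each event. A small sketch follows (timestamps as numeric day indices is an assumption), including the τ-smoothed σ′ used when sampling e.

```python
import math

def update_event_gaussian(timestamps, tau=0.5):
    """Equation 6: re-estimate (mu_e, sigma_e) for one event from the timestamps
    of the documents currently assigned to it, and return the smoothed
    sigma' = sigma + tau used when sampling e, so that sigma = 0 does not
    freeze the event's temporal scope."""
    n = len(timestamps)
    mu = sum(timestamps) / n
    sigma = math.sqrt(sum((t - mu) ** 2 for t in timestamps) / n)
    return mu, sigma + tau
```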
For some documents, s label can be easily derived with high accuracy so that we can exploit the information to learn the topic preferences. To obtain the labeled data, we use the description of each event entry in the reference chronicle ER during period ΛR as a query to retrieve relevant documents in the corpus using Lucene (Jakarta, 2004) which is an information retrieval software library. We define R as the set of documents in hits of any event entry in the reference chronicle returned by Lucene: R = ∪e∈ERHit(e) where Hit(e) is the complete hit list of event e returned by Lucene. For document d with timestamp td, if d /∈R and td ∈ΛR, then d is considered irrelevant to the event chronicle and thus it would be labeled as a negative example. To generate positive examples, we use a strict criterion since we cannot guarantee that all the documents in R are actually relevant. To precisely generate positive examples, a document d is labeled as positive only if it satisfies the positive condition which is defined as follows: ∃e∈ER0 ≤td −te ≤10 ∧sim(d, e) ≥0.4 where te is time6 of event e, provided by the reference chronicle. sim(d, e) is Lucene’s score of d given query e. According to the positive condition, a positive document example must be lexi6The time unit of td and te is one day. 579 cally similar to some event in the reference chronicle and its timestamp is close to the event’s time. As a result, we can use the labeled data to learn topic preferences of the event chronicle. For the labeled documents, s is fixed during model inference. In contrast, for documents that are not labeled, s is sampled by Eq (2). In this manner, TaHBM can learn topic preferences (i.e., P(z|s)) without any manually labeled data and thus can measure the topical relevance between an event and the reference chronicle. 4 Event Ranking Generating an event chronicle is beyond event detection because we cannot use all detected events to generate the chronicle with a limited number of entries. We propose to use learning-to-rank techniques to select the most salient events to generate the final chronicle since we believe the reference event chronicle can teach us the principles of selecting salient events. Specifically, we use SVMRank (Joachims, 2006). 4.1 Training and Test Set Generation The event detection component returns many document clusters, each of which represents an event. As Section 3.2 shows, each event has a Gaussian distribution whose mean indicates its burst time in TaHBM. We use the events whose burst time is during the reference chronicle’s period as training examples and treat those during the target chronicle’s period as test examples. Formally, the training set and test set are defined as follows: Train = {e|µe ∈ΛR}, Test = {e|µe ∈ΛT } In the training set, events containing at least one positive document (i.e. relevant to the event chronicle) in Section 3.3 are labeled as high rank priority while those without positive documents are labeled as low priority. 4.2 Features We use the following features to train the ranking model, all of which can be provided by TaHBM. • P(s = 1|e): the probability that an event e is topically relevant to the reference chronicle. • P(e|z): the probability reflects an event’s impact given its topic. • σe: the parameter of an event e’s Gaussian distribution. It determines the ‘bandwidth’ of the Gaussian distribution and thus can be considered as the time span of e. • |De|: the number of documents related to event e, reflecting the impact of e. 
• |De| σe : For an event with a long time span (e.g., Premier League), the number of relevant documents is large but its impact may not be profound. Hence, we use |De| σe to normalize |De|, which may better reflect the impact of e. 5 Experiments 5.1 Experiment Setting Data: We use various event chronicles during 2009 as references to generate their counterparts during 2010. Specifically, we collected disaster, sports, war, politics and comprehensive chronicles during 2009 from mapreport7, infoplease and Wikipedia8. To generate chronicles during 2010, we use 2009-2010 APW and Xinhua news in English Gigaword (Graff et al., 2003) and remove documents whose titles and first paragraphs do not include any burst words. We detect burst words using Kleinberg algorithm (Kleinberg, 2003), which is a 2-state finite automaton model and widely used to detect bursts. In total, there are 140,557 documents in the corpus. Preprocessing: We remove stopwords and use Stanford CoreNLP (Manning et al., 2014) to do lemmatization. Parameter setting: For TaHBM, we empirically set α = 0.05, βz = 0.005, βe = 0.0001, γs = 0.05, γx = 0.5, ε = 0.01, the number of topics K = 50, and the number of events E = 5000. We run Gibbs sampler for 2000 iterations with burn-in period of 500 for inference. For event ranking, we set regularization parameter of SVMRank c = 0.1. Chronicle display: We use a heuristic way to generate the description of each event. Since the first paragraph of a news article is usually a good summary of the article and the earliest document in a cluster usually explicitly describes the event, for an event represented by a document cluster, we choose the first paragraph of the earliest document written in 2010 in the cluster to generate the event’s description. The earliest document’s timestamp is considered as the event’s time. 7http://www.mapreport.com 8http://en.wikipedia.org/wiki/2009 580 5.2 Evaluation Methods and Baselines Since there is no existing evaluation metric for the new task, we design a method for evaluation. Although there are manually edited event chronicles on the web, which may serve as references for evaluation, they are often incomplete. For example, the 2010 politics event chronicle on Wikipedia has only two event entries. Hence, we first pool all event entries of existing chronicles on the web and chronicles generated by approaches evaluated in this paper and then have 3 human assessors judge each event entry for generating a ground truth based on its topical relevance, impact and description according to the standard of the reference chronicles. An event entry will be included in the ground-truth only if it is selected as a candidate by at least two human judges. On average, the existing event chronicles on the web cover 50.3% of event entries in the ground-truth. Given the ground truth, we can use Precision@k to evaluate an event chronicle’s quality. Precision@k = |EG ∩Etopk|/k where EG and Etopk are ground-truth chronicle and the chronicle with top k entries generated by an approach respectively. If there are multiple event entries corresponding to one event in the ground-truth, only one is counted. For comparison, we choose several baseline approaches. Note that event detection models except TaHBM do not provide features used in learningto-rank model. 
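The chronicle display heuristic just described is easy to state in code; the sketch below assumes each document object carries a timestamp and pre-split paragraphs, and that the target year is 2010 as in this setting. The attribute names are placeholders.

```python
def chronicle_entry(cluster, target_year=2010):
    """Pick the earliest target-year document in an event cluster; use its first
    paragraph as the entry description and its timestamp as the event time."""
    in_year = [d for d in cluster if d.timestamp.year == target_year]
    earliest = min(in_year, key=lambda d: d.timestamp)
    return {"time": earliest.timestamp, "description": earliest.paragraphs[0]}
```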
For these detection models, we use a criterion that considers both relevance and importance to rank events: rankscorebasic(e) = X d∈De maxe′∈ERsim(d, e′) where ER is the reference chronicle and sim(d, e′) is Lucene’s score of document d given query e′. We call this ranking criterion as basic criterion. • Random: We randomly select k documents to generate the chronicle. • NB+basic: Since TaHBM is essentially an extension of NB, we use Naive Bayes (NB) to detect events and basic ranking criterion to rank events. • B-HAC+basic: We use hierarchical agglomerative clustering (HAC) based on BurstVSM schema (Zhao et al., 2012) to detect events, which is the state-of-the-art event detection method for general domains. • TaHBM+basic: we use this baseline to verify the effectiveness of learning-to-rank. As TaHBM, the number of clusters in NB is set to 5000 for comparison. For B-HAC, we adopt the same setting with (Zhao et al., 2012). 5.3 Experiment Results Using the evaluation method introduced above, we can conduct a quantitative evaluation for event chronicle generation approaches9. Table 1 shows the overall performance. Our approach outperforms the baselines for all chronicles. TaHBM beats other detection models for chronicle generation owing to its ability of incorporating the temporal information and identification of event-specific details of a document. Moreover, learning-to-ranking is proven more effective to rank events than the basic ranking criterion. Among these 5 chronicles, almost all approaches perform best on disaster event chronicle while worst on sports event chronicle. We analyzed the results and found that many event entries in the sports event chronicle are about the opening match, or the first-round match of a tournament due to the display method described in Section 5.1. According to the reference sport event chronicle, however, only matches after quarterfinals in a tournament are qualified to be event entries. In other words, a sports chronicle should provide information about the results of semi-final and final, and the champion of the tournament instead of the first-round match’s result, which accounts for the poor performance. In contrast, the earliest document about a disaster event always directly describes the disaster event while the following reports usually concern responses to the event such as humanitarian aids and condolence from the world leaders. The patterns of reporting war events are similar to those of disasters, thus the quality of war chronicle is also good. Politics is somewhat complex because some political events (e.g., election) are arranged in advance while others (e.g., government shutdown) are unexpected. It is notable that for generating comprehensive event chronicles, learning-to-rank does 9Due to the space limitation, we display chronicles generated by our approach in the supplementary notes. 581 sports politics disaster war comprehensive P@50 P@100 P@50 P@100 P@50 P@100 P@50 P@100 P@50 P@100 Random 0.02 0.08 0 0 0.02 0.04 0 0 0.02 0.03 NB+basic 0.08 0.12 0.18 0.19 0.42 0.36 0.18 0.17 0.38 0.31 B-HAC+basic 0.10 0.13 0.30 0.26 0.50 0.47 0.30 0.22 0.36 0.32 TaHBM+basic 0.18 0.15 0.30 0.29 0.50 0.43 0.46 0.36 0.38 0.33 Our approach 0.20 0.15 0.38 0.36 0.64 0.53 0.54 0.41 0.40 0.33 Table 1: Performance of event chronicle generation. 
Topically Irrelevant Trivial Events Indirect Description Redundant Entries disaster 31.91% 17.02% 44.68% 6.38% sports 38.82% 55.29% 3.52% 2.35% comp 67.16% 31.34% 1.49% Table 2: Proportion of errors in disaster, sports and comprehensive event chronicles. not show significant improvement. A possible reason is that a comprehensive event chronicle does not care the topical relevance of a event. In other words, its ranking problem is simpler so that the learning-to-rank does not improve the basic ranking criterion much. Moreover, we analyze the incorrect entries in the chronicles generated by our approaches. In general, there are four types of errors. Topically irrelevant: the topic of an event entry is irrelevant to the event chronicle. Minor events: the event is not important enough to be included. For example, “20100828: Lebanon beat Canada 81-71 in the opening round of the basketball world championships” is a minor event in the sports chronicle because it is about an opening-round match and not important enough. Indirect description: the entry does not describe a major event directly. For instance, “20100114: Turkey expressed sorrow over the Haiti earthquake” is an incorrect entry in the disaster chronicle though it mentions the Haiti earthquake. Redundant entries: multiple event entries describe the same event. We analyze the errors of the disaster, sports and comprehensive event chronicle since they are representative, as shown in Table 2. Topical irrelevance is a major error source for both disaster and sports event chronicles. This problem mainly arises from incorrect identification of topically relevant events during detection. Moreover, disaster and sports chronicles have their own more serious problems. Disaster event chronicles suffer from the indirect description problem since there are many responses (e.g., humanitarian aids) to a disaster. These responses are topically relevant and contain many documents, and thus appear in the top list. One possible solution might be to increase the event granularity by adjusting parameters of the detection model so that the documents describing a major event and those discussing in response to this event can be grouped into one cluster (i.e., one event). In contrast, the sports event chronicle’s biggest problem is on minor events, as mentioned before. Like the sports chronicle, the comprehensive event chronicle also has many minor event entries but its main problem results from its strict criterion. Since comprehensive chronicles can include events of any topic, only extremely important events can be included. For example, “Netherlands beat Uruguay to reach final in the World Cup 2010” may be a correct event entry in sports chronicles but it is not a good entry in comprehensive chronicles. Compared with comprehensive event chronicles, events in other chronicles tend to describe more details. For example, a sports chronicle may regard each match in the World Cup as an event while comprehensive chronicles consider the World Cup as one event, which requires us to adapt event granularity for different chronicles. Also, we evaluate the time of event entries in these five event chronicles because event’s happening time is not always equal to the timestamp of the document creation time (UzZaman et al., 2012; Ge et al., 2013). We collect existing manually edited 2010 chronicles on the web and use their event time as gold standard. 
We define a metric to evaluate if the event entry’s time in our chronicle is accurate: diff= P e∈E∩E∗|(te −t∗ e)|/|E ∩E∗| where E and E∗are our chronicle and the manually edited event chronicle respectively. te is e’s 582 time labeled by our method and t∗ e is e’s correct time. Note that for multiple entries referring the same event in event chronicles, the earliest entry’s time is used as the event’s time to compute diff. sports politics disaster war comprehensive 0.800 3.363 1.042 1.610 2.467 Table 3: Difference between an event’s actual time and the time in our chronicles. Time unit is a day. Table 3 shows the performance of our approach in labeling event time. For disaster, sports and war, the accuracy is desirable since important events about these topics are usually reported in time. In contrast, the accuracy of political event time is the lowest. The reason is that some political events may be confidential and thus they are not reported as soon as they happen; on the other hand, some political events (e.g., a summit) are reported several days before the events happen. The comprehensive event chronicle includes many political events, which results in a lower accuracy. 6 Related Work To the best of our knowledge, there was no previous end-to-end topically relevant event chronicle generation work but there are some related tasks. Event detection, sometimes called topic detection (Allan, 2002), is an important part of our approach. Yang et al. (1998) used clustering techniques for event detection on news. He et al. (2007) and Zhao et al. (2012) designed burst feature representations for detecting bursty events. Compared with our TaHBM, these methods lack the ability of distinguishing similar events. Similar to event detection, event extraction focuses on finding events from documents. Most work regarding event extraction (Grishman et al., 2005; Ahn, 2006; Ji and Grishman, 2008; Chen and Ji, 2009; Liao and Grishman, 2010; Hong et al., 2011; Li et al., 2012; Chen and Ng, 2012; Li et al., 2013) was developed under Automatic Content Extraction (ACE) program. The task only defines 33 event types and events are in much finer grain than those in our task. Moreover, there was work (Verhagen et al., 2005; Chambers and Jurafsky, 2008; Bethard, 2013; Chambers, 2013; Chambers et al., 2014) about temporal event extraction and tracking. Like ACE, the granularity of events in this task is too fine to be suitable for our task. Also, timeline generation is related to our work. Most previous work focused on generating a timeline for a document (Do et al., 2012), a centroid entity (Ji et al., 2009) or one major event (Hu et al., 2011; Yan et al., 2011; Lin et al., 2012; Li and Li, 2013). In addition, Li and Cardie (2014) generated timelines for users in microblogs. The most related work to ours is Swan and Allan (2000). They used a timeline to show bursty events along the time, which can be seen as an early form of event chronicles. Different from their work, we generate a topically relevant event chronicle based on a reference event chronicle. 7 Conclusions and Future Work In this paper, we propose a novel task – automatic generation of topically relevant event chronicles. It can serve as a new framework to combine the merits of Information Retrieval, Information Extraction and Summarization techniques, to rapidly extract and rank salient events. 
This framework is also able to rapidly and accurately capture a user’s interest and needs based on the reference chronicle (instead of keywords as in Information Retrieval or event templates as in Guided Summarization) which can reflect diverse levels of granularity. As a preliminary study of this new challenge, this paper focuses on event detection and ranking. There are still many challenges for generating high-quality event chronicles. In the future, we plan to investigate automatically adapting an event’s granularity and learn the principle of summarizing the event according to the reference event chronicle. Moreover, we plan to study the generation of entity-driven event chronicles, leveraging more fine-grained entity and event extraction approaches. Acknowledgments We thank the anonymous reviewers for their thought-provoking comments. This work is supported by National Key Basic Research Program of China 2014CB340504, NSFC project 61375074, China Scholarship Council (CSC, No. 201406010174) and USA ARL NS-CTA No. W911NF-09-2-0053. The contact author of this paper is Zhifang Sui. References David Ahn. 2006. The stages of event extraction. In Workshop on Annotating and Reasoning about Time and Events. 583 James Allan. 2002. Topic detection and tracking: event-based information organization, volume 12. Springer Science & Business Media. Steven Bethard. 2013. Cleartk-timeml: A minimalist approach to tempeval 2013. In Second Joint Conference on Lexical and Computational Semantics. Jeff A Bilmes. 1998. A gentle tutorial of the em algorithm and its application to parameter estimation for gaussian mixture and hidden markov models. International Computer Science Institute, 4(510):126. Nathanael Chambers and Dan Jurafsky. 2008. Jointly combining implicit constraints improves temporal ordering. In EMNLP. Nathanael Chambers, Taylor Cassidy, Bill McDowell, and Steven Bethard. 2014. Dense event ordering with a multi-pass architecture. TACL, 2:273–284. Nathanael Chambers. 2013. Navytime: Event and time ordering from raw text. Technical report, DTIC Document. Chaitanya Chemudugunta and Padhraic Smyth Mark Steyvers. 2007. Modeling general and specific aspects of documents with a probabilistic topic model. In NIPS. Zheng Chen and Heng Ji. 2009. Language specific issue and feature exploration in chinese event extraction. In NAACL. Chen Chen and Vincent Ng. 2012. Joint modeling for chinese event extraction with rich linguistic features. In COLING. Quang Xuan Do, Wei Lu, and Dan Roth. 2012. Joint inference for event timeline construction. In EMNLP. Tao Ge, Baobao Chang, Sujian Li, and Zhifang Sui. 2013. Event-based time label propagation for automatic dating of news articles. In EMNLP. David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2003. English gigaword. Linguistic Data Consortium, Philadelphia. Ralph Grishman, David Westbrook, and Adam Meyers. 2005. Nyu’s english ace 2005 system description. In ACE 2005 Evaluation Workshop. Qi He, Kuiyu Chang, and Ee-Peng Lim. 2007. Using burstiness to improve clustering of topics in news streams. In ICDM. Yu Hong, Jianfeng Zhang, Bin Ma, Jian-Min Yao, Guodong Zhou, and Qiaoming Zhu. 2011. Using cross-entity inference to improve event extraction. In ACL. Po Hu, Minlie Huang, Peng Xu, Weichang Li, Adam K Usadi, and Xiaoyan Zhu. 2011. Generating breakpoint-based timeline overview for news topic retrospection. In ICDM. Apache Jakarta. 2004. Apache lucene-a highperformance, full-featured text search engine library. Heng Ji and Ralph Grishman. 2008. 
Refining event extraction through cross-document inference. In ACL. Heng Ji, Ralph Grishman, Zheng Chen, and Prashant Gupta. 2009. Cross-document event extraction and tracking: Task, evaluation, techniques and challenges. In RANLP. Thorsten Joachims. 2006. Training linear svms in linear time. In SIGKDD. Jon Kleinberg. 2003. Bursty and hierarchical structure in streams. Data Mining and Knowledge Discovery, 7(4):373–397. Jiwei Li and Claire Cardie. 2014. Timeline generation: Tracking individuals on twitter. In WWW. Jiwei Li and Sujian Li. 2013. Evolutionary hierarchical dirichlet process for timeline summarization. In ACL. Peifeng Li, Guodong Zhou, Qiaoming Zhu, and Libin Hou. 2012. Employing compositional semantics and discourse consistency in chinese event extraction. In EMNLP. Qi Li, Heng Ji, and Liang Huang. 2013. Joint event extraction via structured prediction with global features. In ACL. Shasha Liao and Ralph Grishman. 2010. Using document level cross-event inference to improve event extraction. In ACL. Chen Lin, Chun Lin, Jingxuan Li, Dingding Wang, Yang Chen, and Tao Li. 2012. Generating event storylines from microblogs. In CIKM. Christopher D Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J Bethard, and David McClosky. 2014. The stanford corenlp natural language processing toolkit. In ACL System Demonstrations. Russell Swan and James Allan. 2000. Automatic generation of overview timelines. In SIGIR. Naushad UzZaman, Hector Llorens, James Allen, Leon Derczynski, Marc Verhagen, and James Pustejovsky. 2012. Tempeval-3: Evaluating events, time expressions, and temporal relations. arXiv preprint arXiv:1206.5333. Marc Verhagen, Inderjeet Mani, Roser Sauri, Robert Knippen, Seok Bae Jang, Jessica Littman, Anna Rumshisky, John Phillips, and James Pustejovsky. 2005. Automating temporal annotation with tarsqi. In ACL demo. Rui Yan, Xiaojun Wan, Jahna Otterbacher, Liang Kong, Xiaoming Li, and Yan Zhang. 2011. Evolutionary timeline summarization: a balanced optimization framework via iterative substitution. In SIGIR. 584 Yiming Yang, Tom Pierce, and Jaime Carbonell. 1998. A study of retrospective and on-line event detection. In SIGIR. Wayne Xin Zhao, Rishan Chen, Kai Fan, Hongfei Yan, and Xiaoming Li. 2012. A novel burst-based text representation model for scalable event detection. In ACL. 585
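As a small, self-contained illustration of the time-labeling evaluation reported above, the sketch below computes the diff metric: the mean absolute difference in days between an entry's labeled time and its correct time, averaged over the events shared by the generated chronicle and the manually edited one, with the earliest entry time used when several entries refer to the same event. The data layout (event ids paired with dates) is an assumption made for the example; this is not the authors' evaluation script.

```python
from datetime import date

def time_diff(system_entries, gold_times):
    """Mean absolute time difference (in days) over events present in both
    the generated chronicle and the manually edited reference chronicle.

    system_entries: list of (event_id, date) pairs produced by the system;
                    several entries may refer to the same event.
    gold_times:     dict mapping event_id -> correct date from the reference.
    """
    # For duplicate entries of the same event, keep the earliest labeled time.
    labeled = {}
    for event_id, t in system_entries:
        if event_id not in labeled or t < labeled[event_id]:
            labeled[event_id] = t

    shared = set(labeled) & set(gold_times)   # E ∩ E*
    if not shared:
        return 0.0
    total = sum(abs((labeled[e] - gold_times[e]).days) for e in shared)
    return total / len(shared)

# Hypothetical usage:
system = [("ev1", date(2015, 3, 2)), ("ev1", date(2015, 3, 5)),
          ("ev2", date(2015, 4, 1))]
gold = {"ev1": date(2015, 3, 1), "ev2": date(2015, 4, 2)}
print(time_diff(system, gold))  # -> 1.0
```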
2015
56
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 586–595, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Context-aware Entity Morph Decoding Boliang Zhang1, Hongzhao Huang1, Xiaoman Pan1, Sujian Li2, Chin-Yew Lin3 Heng Ji1, Kevin Knight4, Zhen Wen5, Yizhou Sun6, Jiawei Han7, Bulent Yener1 1Rensselaer Polytechnic Institute, 2Peking University, 3Microsoft Research Asia, 4University of Southern California 5IBM T. J. Watson Research Center, 6Northeastern University, 7Univerisity of Illinois at Urbana-Champaign 1{zhangb8,huangh9,panx2,jih,yener}@rpi.edu, [email protected], [email protected] [email protected], [email protected], [email protected], [email protected] Abstract People create morphs, a special type of fake alternative names, to achieve certain communication goals such as expressing strong sentiment or evading censors. For example, “Black Mamba”, the name for a highly venomous snake, is a morph that Kobe Bryant created for himself due to his agility and aggressiveness in playing basketball games. This paper presents the first end-to-end context-aware entity morph decoding system that can automatically identify, disambiguate, verify morph mentions based on specific contexts, and resolve them to target entities. Our approach is based on an absolute “cold-start” - it does not require any candidate morph or target entity lists as input, nor any manually constructed morph-target pairs for training. We design a semi-supervised collective inference framework for morph mention extraction, and compare various deep learning based approaches for morph resolution. Our approach achieved significant improvement over the state-of-the-art method (Huang et al., 2013), which used a large amount of training data. 1 1 Introduction Morphs (Huang et al., 2013; Zhang et al., 2014) refer to the fake alternative names created by social media users to entertain readers or evade censors. For example, during the World Cup in 2014, 1The data set and programs are publicly available at: http://nlp.cs.rpi.edu/data/morphdecoding.zip and http://nlp.cs.rpi.edu/software/morphdecoding.tar.gz a morph “Su-tooth” was created to refer to the Uruguay striker “Luis Suarez” for his habit of biting other players. Automatically decoding humangenerated morphs in text is critical for downstream deep language understanding tasks such as entity linking and event argument extraction. However, even for human, it is difficult to decode many morphs without certain historical, cultural, or political background knowledge (Zhang et al., 2014). For example, “The Hutt” can be used to refer to a fictional alien entity in the Star Wars universe (“The Hutt stayed and established himself as ruler of Nam Chorios”), or the governor of New Jersey, Chris Christie (“The Hutt announced a bid for a seat in the New Jersey General Assembly”). Huang et al. (2013) did a pioneering pilot study on morph resolution, but their approach assumed the entity morphs were already extracted and used a large amount of labeled data. In fact, they resolved morphs on corpus-level instead of mention-level and thus their approach was context-independent. A practical morph decoder, as depicted in Figure 1, consists of two problems: (1) Morph Extraction: given a corpus, extract morph mentions; and (2). Morph Resolution: For each morph mention, figure out the entity that it refers to. 
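As a rough sketch of this two-stage decomposition, the following skeleton shows how extraction and resolution fit together in an end-to-end decoder. The function names, placeholder bodies, and return types are illustrative assumptions, not the authors' implementation.

```python
from typing import List, Tuple

Mention = Tuple[int, str]          # (tweet index, surface string)
Annotation = Tuple[int, str, str]  # (tweet index, surface string, target entity)

def extract_morph_mentions(tweets: List[str]) -> List[Mention]:
    """Morph Extraction: find mentions that are used as morphs in their
    specific context (potential-morph discovery followed by verification)."""
    return []  # placeholder

def resolve_morph_mention(tweets: List[str], mention: Mention) -> str:
    """Morph Resolution: identify target candidates for the mention and
    rank them by contextual similarity, returning the top entity."""
    return ""  # placeholder

def decode(tweets: List[str]) -> List[Annotation]:
    """End-to-end morph decoding: extraction followed by resolution."""
    return [(idx, surface, resolve_morph_mention(tweets, (idx, surface)))
            for idx, surface in extract_morph_mentions(tweets)]
```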
In this paper, we aim to solve the fundamental research problem of end-to-end morph decoding and propose a series of novel solutions to tackle the following challenges. Challenge 1: Large-scope candidates Only a very small percentage of terms can be used as morphs, which should be interesting and fun. As we annotate a sample of 4, 668 Chinese weibo tweets, only 450 out of 19, 704 unique terms are morphs. To extract morph mentions, we propose a 586 !)$, #!"? (Conquer West King from Chongqing fell from power, do we still need to sing red songs?) 0. (Buhou and Little Brother Ma.)  ! -!!  ! ! (Attention! Chongqing Conquer West King! Attention! Brother Jun!) 672<, %4  ,  !. (Wu Sangui met Wei Xiaobao, and led the army of Qing dynasty into China, and then became Conquer West King.) ): (Bo Xilai) # (Ma Ying-jeou) ! (Wang Lijun) Tweets Target Entities d1 d2 d3 d4 Figure 1: An Illustration of Morph Decoding Task. two-step approach to first identify individual mention candidates to narrow down the search scope, and then verify whether they refer to morphed entities instead of their original meanings. Challenge 2: Ambiguity, Implicitness, Informality Compared to regular entities, many morphs contain informal terms with hidden information. For example, “不厚(not thick)” is used to refer to “薄熙来(Bo Xilai)” whose last name “薄(Bo)” means “thin”. Therefore we attempt to model the rich contexts with careful considerations for morph characteristics both globally (e.g., language models learned from a large amount of data) and locally (e.g. phonetic anomaly analysis) to extract morph mentions. For morph resolution, the main challenge lies in that the surface forms of morphs usually appear quite different from their target entity names. Based on the distributional hypothesis (Harris, 1954) which states that words that often occur in similar contexts tend to have similar meanings, we propose to use deep learning techniques to capture and compare the deep semantic representations of a morph and its candidate target entities based on their contextual clues. For example, the morph “平西王(Conquer West King)” and its target entity “薄熙来(Bo Xilai)” share similar implicit contextual representations such as “重庆(Chongqing)” (Bo was the governor of Chongqing) and “倒台 (fall from power)”. Challenge 3: Lack of labeled data To the best of our knowledge, no sufficient mention-level morph annotations exist for training an end-to-end decoder. Manual morph annotations require native speakers who have certain cultural background (Zhang et al., 2014). In this paper we focus on exploring novel approaches to save annotation cost in each step. For morph extraction, based on the observation that morphs tend to share similar characteristics and appear together, we propose a semi-supervised collective inference approach to extract morph mentions from multiple tweets simultaneously. Deep learning techniques have been successfully used to model word representation in an unsupervised fashion. For morph resolution, we make use of a large amount of unlabeled data to learn the semantic representations of morphs and target entities based on the unsupervised continuous bag-of-words method (Mikolov et al., 2013b). 2 Problem Formulation Following the recent work on morphs (Huang et al., 2013; Zhang et al., 2014), we use Chinese Weibo tweets for experiments. Our goal is to develop an end-to-end system that automatically extract morph mentions and resolve them to their target entities. 
Given a corpus of tweets D = {d1, d2, ..., d|D|}, we define a candidate morph mi as a unique term tj in T, where T = {t1, t2, ..., t|T|} is the set of unique terms in D. To extract T, we first apply several well-developed Natural Language Processing tools, including Stanford Chinese word segmenter (Chang et al., 2008), Stanford part-ofspeech tagger (Toutanova et al., 2003) and Chinese lexical analyzer ICTCLAS (Zhang et al., 2003), to process the tweets and identify noun phrases. Then we define a morph mention mp i of mi as the p-th occurrence of mi in a specific document dj. Note that a mention with the same surface form as mi but referring to its original entity is not considered as a morph mention. For instance, the “平西 王(Conquer West King)” in d1 and d3 in Figure 1 are morph mentions since they refer to the modern politician “薄熙来(Bo Xilai)”, while the one in d4 is not a morph mention since it refers to the original entity, who was king “吴三桂(Wu Sangui)”. For each morph mention, we discover a list of target candidates E = {e1, e2, ..., e|E|} from Chinese web data for morph mention resolution. We 587 design an end-to-end morph decoder which consists of the following procedure: • Morph Mention Extraction – Potential Morph Discovery: This first step aims to obtain a set of potential entity-level morphs M = {m1, m2, ...}(M ⊆T). Then, we only verify and resolve the mentions of these potential morphs, instead of all the terms in T in a large corpus. – Morph Mention Verification: In this step, we aim to verify whether each mention mp i of the potential morph mi(mi ∈M) from a specific context dj is a morph mention or not. • Morph Mention Resolution: The final step is to resolve each morph mention mp i to its target entity (e.g., “薄熙来(Bo Xilai)” for the morph mention “平西王(Conquer West King)” in d1 in Figure 1). 3 Morph Mention Extraction 3.1 Why Traditional Entity Mention Extraction doesn’t Work In order to automatically extract morph mentions from any given documents, our first reflection is to formulate the task as a sequence labeling problem, just like labeling regular entity mentions. We adopted the commonly used conditional random fields (CRFs) (Lafferty et al., 2001) and got only 6% F-score. Many morphs are not presented as regular entity mentions. For example, the morph “天线(Antenna)” refers to “温家宝(Wen Jiabao)” because it shares one character “宝(baby)” with the famous children’s television series “天 线宝宝(Teletubbies)”. Even when they are presented as regular entity mentions, they must refer to new target entities which are different from the regular ones. So we propose the following novel two-step solution. 3.2 Potential Morph Discovery We first introduce the first step of our approach – potential morph discovery, which aims to narrow down the scope of morph candidates without losing recall. This step takes advantage of the common characteristics shared among morphs and identifies the potential morphs using a supervised method, since it is relatively easy to collect a certain number of corpus-level morphs as training data compared to labeling morph mentions. Through formulating this task as a binary classification problem, we adopt the Support Vector Machines (SVMs) (Cortes and Vapnik, 1995) as the learning model. We propose the following four categories of features. Basic: (i) character unigram, bigram, trigram, and surface form; (ii) part-of-speech tags; (iii) the number of characters; (iv) whether some characters are identical. 
These basic features will help identify several common characteristics of morph candidates (e.g., they are very likely to be nouns, and very unlikely to contain single characters). Dictionary: Many morphs are non-regular names derived from proper names while retaining some characteristics. For example, the morphs “薄督(Governor Bo)” and “吃省(Gourmand Province)” are derived from their target entity names “薄熙来(Bo Xilai)” and “广东省(Guandong Province)”, respectively. Therefore, we adopt a dictionary of proper names (Li et al., 2012) and propose the following features: (i) Whether a term occurs in the dictionary. (ii) Whether a term starts with a commonly used last name, and includes uncommonly used characters as its first name. (iii) Whether a term ends with a geopolitical entity or organization suffix word, but it’s not in the dictionary. Phonetic: Many morphs are created based on phonetic (Chinese pinyin in our case) modifications. For instance, the morph “饭饼饼(Rice Cake)” has the same phonetic transcription as its target entity name “范冰冰(Fan Bingbing)”. To extract phonetic-based features, we compile a dictionary composed of ⟨phonetic transcription, term⟩pairs from the Chinese Gigaword corpus 2. Then for each term, we check whether it has the same phonetic transcription as any entry in the dictionary but they include different characters. Language Modeling: Many morphs rarely appear in a general news corpus (e.g., “六步郎 (Six Step Man)” refers to the NBA baseketball player “勒布朗·詹姆斯(Lebron James)”.). Therefore, we use the character-based language models trained from Gigaword to calculate the occurrence probabilities of each term, and use n-gram probabilities (n ∈[1 : 5]) as features. 3.3 Morph Mention Verification The second step is to verify whether a mention of the discovered potential morphs is indeed used as a morph in a specific context. Based on the ob2https://catalog.ldc.upenn.edu/LDC2011T07 588 servation that closely related morph mentions often occur together, we propose a semi-supervised graph-based method to leverage a small set of labeled seeds, coreference and correlation relations, and a large amount of unlabeled data to perform collective inference and thus save annotation cost. According to our observation of morph mentions, we propose the following two hypotheses: Hypothesis 1: If two mentions are coreferential, then they both should either be morph mentions or non-morph mentions. For instance, the morph mentions “平西王(Conquer West King)” in d1 and d3 in Figure 1 are coreferential, they both refer to the modern politician “薄熙来(Bo Xilai)”. Hypothesis 2: Those highly correlated mentions tend to either be morph mentions or nonmorph mentions. From our annotated dataset, 49% morph mentions co-occur on tweet level. For example, “平西王(Conquer West King)” and “军 哥(Brother Jun)” are used together in d3 in Figure 1. Based on these hypotheses, we aim to design an effective approach to compensate for the limited annotated data. Graph-based semi-supervised learning approaches (Zhu et al., 2003; Smola and Kondor, 2003; Zhou et al., 2004) have been successfully applied many NLP tasks (Niu et al., 2005; Chen et al., 2006; Huang et al., 2014). Therefore we build a mention graph to capture the semantic relatedness (weighted arcs) between potential morph mentions (nodes) and propose a semi-supervised graph-based algorithm to collectively verify a set of relevant mentions using a small amount of labeled data. We now describe the detailed algorithm as follows. 
Mention Graph Construction First, we construct a mention graph that can reflect the association between all the mentions of potential morphs. According to the above two hypotheses, mention coreference and correlation relations are the basis to build our mention graph, which is represented by a matrix. In Chinese Weibo, their exist rich and clean social relations including authorship, replying, retweeting, or user mentioning relations. We make use of these social relations to judge the possibility of two mentions of the same potential morph being coreferential. If there exists one social relation between two mentions mp i and mq i of the morph mi, they are usually coreferential and assigned an association score 1. We also detect coreferential relations by performing content similarity analysis. The cosine similarity is adopted with the tf-idf representation for the contexts of two mentions. Then we get a coreference matrix W 1: W 1 mp i ,mq i =          1.0 if mp i and mq i are linked with certain social relation cos(mp i , mq i ) else if q ∈kNN(p) 0 Otherwise where mp i and mq i are two mentions from the same potential morph mi, and kNN means that each mention is connected to its k nearest neighboring mentions. Users tend to use morph mentions together to achieve their communication goals. To incorporate such evidence, we measure the correlation between two mentions mp i and mq j of two different potential morphs mi and mj as corr(mp i , mq j) = 1.0 if there exists a certain social relation between them. Otherwise, corr(mp i , mq j) = 0. Then we can obtain the correlation matrix: W 2 mp i ,mq j = corr(mp i , mq j). To tune the balance of coreferential relation and correlation relation during learning, we first get two matrices ˆW 1 and ˆW 2 by row-normalizing W 1 and W2, respectively. Then we obtain the final mention matrix W with a linear combination of ˆW 1 and ˆW 2: W = α ˆW 1 + (1 −α) ˆW 2, where α is the coefficient between 0 and 1 3. Graph-based Semi-supervised Learning Intuitively, if two mentions are strongly connected, they tend to hold the same label. The label of 1 indicates a mention is a morph mention, and 0 means a non-morph mention. We use Y =  Yl Yu T to denote the label vector of all mentions, where the first l nodes are verified mentions labeled as 1 or 0, and the remaining u nodes need to be verified and initialized with the label 0.5. Our final goal is to obtain the final label vector Yu by incorporating evidence from initial labels and the mention graph. Following the graph-based semi-supervised learning algorithm (Zhu et al., 2003), the mention verification problem is formulated to optimize the objective function Q(Y) = µ Pl i=1(yi −y0 i )2 + 1 2 P i,j Wij(yi −yj)2 where y0 i denotes the initial 3α is set to 0.8 in this paper, optimized from the development set. 589 label, and µ is a regularization parameter that controls the trade-off between initial labels and the consistency of labels on the mention graph. Zhu et al. (2003) has proven that this formula has both closed-form and iterative solutions. 4 Morph Mention Resolution The final step is to resolve the extracted morph mentions to their target entities. 4.1 Candidate Target Identification We start from identifying a list of target candidates for each morph mention from the comparable corpora including Sina Weibo, Chinese News and English Twitter. 
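Returning to the verification step just described, the sketch below implements one standard iterative solution for this family of graph objectives, in the spirit of Zhu et al. (2003) rather than the authors' exact code: seed mentions are pulled toward their initial labels with strength μ, while the remaining mentions repeatedly take the weighted average of their neighbours' scores on the combined mention graph W.

```python
import numpy as np

def propagate_labels(W, y0, labeled_mask, mu=1.0, n_iter=100, tol=1e-6):
    """Semi-supervised scoring of morph-mention candidates on a mention graph.

    W            : (n, n) symmetric matrix of mention-mention association
                   weights (combined coreference / correlation matrix).
    y0           : (n,) initial labels: 1 / 0 for verified seed mentions,
                   0.5 for mentions still to be verified.
    labeled_mask : (n,) boolean array marking the seed mentions.
    mu           : trade-off between fitting the seeds and graph smoothness.
    """
    y = y0.astype(float).copy()
    degree = W.sum(axis=1)
    for _ in range(n_iter):
        votes = W @ y                                  # weighted neighbour votes
        denom = np.maximum(degree, 1e-12)
        # Unlabeled mentions: weighted average of neighbours
        # (isolated mentions keep their initial value).
        y_unlab = np.where(degree > 0, votes / denom, y0)
        # Seed mentions: neighbour votes blended with the initial label.
        y_lab = (mu * y0 + votes) / (mu + denom)
        updated = np.where(labeled_mask, y_lab, y_unlab)
        if np.max(np.abs(updated - y)) < tol:
            return updated
        y = updated
    return y  # mentions with a final score > 0.5 are accepted as morph mentions

# Hypothetical 4-mention example: mention 0 is a verified morph seed linked
# to mentions 1 and 2 by social relations; mention 3 is isolated.
W = np.array([[0, 1, 1, 0],
              [1, 0, 0, 0],
              [1, 0, 0, 0],
              [0, 0, 0, 0]], dtype=float)
y0 = np.array([1.0, 0.5, 0.5, 0.5])
print(propagate_labels(W, y0, np.array([True, False, False, False])))
```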
After preprocessing the corpora using word segmentation, noun phrase chunking and name tagging, the name entity list is still too large and too noisy for candidate ranking. To clean the name entity list, we adopt the temporal Distribution Assumption proposed in our recent work (Huang et al., 2013). It assumes that a morph m and its real target e should have similar temporal distributions in terms of their occurrences. Following the same heuristic we assume that an entity is a valid candidate for a morph if and only if the candidate appears fewer than seven days after the morph’s appearance. 4.2 Candidate Target Ranking Motivations of Using Deep Learning Compared to regular entity linking tasks (Ji et al., 2010; Ji et al., 2011; Ji et al., 2014), the major challenge of ranking a morph’s candidate target entities lies in that the surface features such as the orthographic similarity between morph and target candidates have been proven inadequate (Huang et al., 2013). Therefore, it is crucial to capture the semantics of both mentions and target candidates. For instance, in order to correctly resolve “平西王(Conquer West King)” from d1 and d3 in Figure 1 to the modern politician “薄熙来(Bo Xilai)” instead of the ancient king “吴三桂(Wu Sangui)”, it is important to model the surrounding contextual information effectively to capture important information (e.g., “重庆(Chongqing)”, “倒台(fall from power)”, and “唱红歌(sing red songs)”) to represent the mentions and target entity candidates. Inspired by the recent success achieved by deep learning based techniques on learning semantic representations for various NLP tasks (e.g., (Bengio et al., 2003; Collobert et al., 2011; Mikolov et al., 2013b; He et al., 2013)), we design and compare the following two approaches to employ hierarchical architectures with multiple hidden layers to extract useful features and map morphs and target entities into a latent semantic space. Pairwise Cross-genre Supervised Learning Ideally, we hope to obtain a large amount of coreferential entity mention pairs for training. A natural knowledge resource is Wikipedia which includes anchor links. We compose an anchor’s surface string and the title of the entity it’s linked to as a positive training pair. Then we randomly sample negative training instances from those pairs that don’t share any links. Our approach consists of the following steps: (1) generating high quality embedding for each training instance; (2) pre-training with the stacked denoising auto-encoder (Bengio et al., 2003) for feature dimension reduction; and (3) supervised fine-tuning to optimize the neural networks towards a similarity measure (e.g., dot product). Figure 2 depicts the overall architecture of this approach. n layers stacked auto-encoders pair-wise supervised fine-tuning layer …. …. sim(m,c) = Dot( f (m), f (c)) f f mention candidate target Figure 2: Overall Architecture of Pairwise Crossgenre Supervised Learning However, morph resolution is significantly different from the traditional entity linking task since the latter mainly focuses on formal and explicit entities (e.g., “薄熙来(Bo Xilai)”) which tend to have stable referents in Wikipedia. In contrast, morphs tend to be informal, implicit and have newly emergent meanings which evolve over time. In fact, these morph mentions rarely appear in Wikipedia. For example, almost all “平西王 (Conquer West King)” mentions in Wikipedia refer to the ancient king instead of the modern politician “薄熙来(Bo Xilai)”. 
In addition, the contextual words in Wikipedia used to describe entities are quite different from those in social media. For example, to describe a death event, Wikipedia usu590 ally uses a formal expression “去世(pass away)” while an informal expression “挂了(hang up)” is used more often in tweets. Therefore this approach suffers from the knowledge discrepancy between these two genres. Within-genre Unsupervised Learning context([already]) Input Layer context([fell from power]) context([sing]) context([red song]) Projection Layer Xw summation Output Layer σ (Xw Tθ) Figure 3: Continuous Bag-of-Words Architecture To address the above challenge, we propose the second approach to learn semantic embeddings of both morph mentions and entities directly from tweets. Also we prefer unsupervised learning methods due to the lack of training data. Following (Mikolov et al., 2013a), we develop a continuous bag-of-words (CBOW) model that can effectively model the surrounding contextual information. CBOW is discriminatively trained by maximizing the conditional probability of a term wi given its contexts c(wi) = {wi−n, ..., wi−1, wi+1, ..., wi+n}, where n is the contextual window size, and wi is a term obtained using the preprocessing step introduced in Section 2 4. The architecture of CBOW is depicted in Figure 3. We obtain a vector Xwi through the projection layer by summing up the embedding vectors of all terms in c(wi), and then use the sigmoid activation function to obtain the final embedding of wi in c(wi) in the output layer. Formally, the objective function of CBOW can be formulated as L(θ) = P wi∈W P wj∈W log p(wj|c(wi)), where W is the set of unique terms obtained from the whole training corpus. p(wj|c(wi)) is the conditional likelihood of wj given the context c(wi) and it is formulated as follows: p(wj|c(wi)) = [σ(XT wiθwj)]Lwi(wj) × [1 −σ(XT wiθwj)]1−Lwi(wj), 4Each wi is not limited to noun phrases we consider as candidate morphs. Data Training Development Testing # Tweets 1,500 500 2,688 # Unique Terms 10,098 4, 848 15,108 # Morphs 250 110 341 # Morph Mentions 1,342 487 2,469 Table 1: Data Statistics where Lwi(wj) =  1, wi = wj 0, Otherwise , σ is the sigmoid activation function, and θwi is the embeddings of wi to be learned with back-propagation during training. 5 Experiments 5.1 Data We retrieved 1,553,347 tweets from Chinese Sina Weibo from May 1 to June 30, 2013 and 66, 559 web documents from the embedded URLs in tweets for experiments. We then randomly sampled 4, 688 non-redundant tweets and asked two Chinese native speakers to manually annotate morph mentions in these tweets. The annotated dataset is randomly split into training, development, and testing sets, with detailed statistics shown in Table 1 5. We used 225 positive instances and 225 negative instances to train the model in the first step of potential morph discovery. We collected a Chinese Wikipedia dump of October 9th, 2014, which contains 2,539,355 pages. We pulled out person, organization and geopolitical pages based on entity type matching with DBpedia 6. We also filter out the pages with fewer than 300 words. For training the model, we use 60,000 mention-target pairs along with one negative sample randomly generated for each pair, among which, 20% pairs are reserved for parameter tuning. 
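As a concrete illustration of the within-genre ranking idea from Section 4.2, the sketch below represents a morph mention by the sum of the embeddings of its context terms and scores each surviving target candidate by cosine similarity. The toy embedding table stands in for the 800-dimensional CBOW vectors trained on the tweet corpus, and the candidate list is assumed to have already passed the temporal filter of Section 4.1; this is an illustration, not the released implementation.

```python
import numpy as np

def context_vector(context_terms, emb):
    """Sum the embeddings of the terms surrounding a morph mention."""
    vecs = [emb[t] for t in context_terms if t in emb]
    return np.sum(vecs, axis=0) if vecs else None

def rank_candidates(context_terms, candidates, emb):
    """Return target candidates sorted by cosine similarity between the
    mention's context vector and each candidate entity's embedding."""
    ctx = context_vector(context_terms, emb)
    if ctx is None:
        return []
    scored = []
    for cand in candidates:
        if cand not in emb:
            continue
        vec = emb[cand]
        sim = float(ctx @ vec / (np.linalg.norm(ctx) * np.linalg.norm(vec) + 1e-12))
        scored.append((cand, sim))
    return sorted(scored, key=lambda x: -x[1])

# Hypothetical toy embeddings (in practice: CBOW vectors learned from tweets).
emb = {
    "Chongqing":       np.array([0.9, 0.1, 0.0]),
    "fell_from_power": np.array([0.8, 0.2, 0.1]),
    "Bo_Xilai":        np.array([0.9, 0.2, 0.0]),
    "Wu_Sangui":       np.array([0.0, 0.1, 0.9]),
}
print(rank_candidates(["Chongqing", "fell_from_power"],
                      ["Bo_Xilai", "Wu_Sangui"], emb))
```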
5.2 Overall: End-to-End Decoding In this subsection, we first study the end-to-end decoding performance of our best system, and compare it with the state-of-the-art supervised learning-to-rank approach proposed by (Huang et al., 2013) based on information networks construction and traverse with meta-paths. We use the 225 extracted morphs as input to feed (Huang et al., 2013) system. The experiment setting, implementation and evaluation process are similar to (Huang et al., 2013). 5We will make all of these annotations and other resources available for research purposes if this paper gets accepted. 6http://dbpedia.org 591 The overall performance of our approach using within-genre learning for resolution is shown in Table 2. We can see that our system achieves significantly better performance (95.0% confidence level by the Wilcoxon Matched-Pairs Signed-Ranks Test) than the approach proposed by (Huang et al., 2013). We found that (Huang et al., 2013) failed to resolve many unpopular morphs (e.g., “小马(Little Ma)” is a morph referring to Ma Yingjiu, and it only appeared once in the data), because it heavily relies on aggregating contextual and temporal information from multiple instances of each morph. In contrast, our unsupervised resolution approach only leverages the pre-trained word embeddings to capture the semantics of morph mentions and entities. Model Precision Recall F1 Huang et al., 2013 40.2 33.3 36.4 Our Approach 41.1 35.9 38.3 Table 2: End-to-End Morph Decoding (%) 5.3 Diagnosis: Morph Mention Extraction The first step discovered 888 potential morphs (80.1% of all morphs, 5.9% of all terms), which indicates that this step successfully narrowed down the scope of candidate morphs. Method Precision Recall F1 Naive 58.0 83.1 68.3 SVMs 61.3 80.7 69.7 Our Approach 88.2 77.2 82.3 Table 3: Morph Mention Verification (%) Now we evaluate the performance of morph mention verification. We compare our approach with two baseline methods: (i) Naive, which considers all mentions as morph mentions; (ii) SVMs, a fully supervised model using Support Vector Machines (Cortes and Vapnik, 1995) based on unigrams and bigrams features. Table 3 shows the results. We can see that our approach achieves significantly better performance than the baseline approaches. In particular it can verify the mentions of newly emergent morphs. For instance, “棒棒 棒(Good Good Good)” is mistakenly identified by the first step as a potential morph, but the second step correctly filters it out. 5.4 Diagnosis: Morph Mention Resolution The target candidate identification step successfully filters 86% irrelevant entities with high precision (98.5% of morphs retain their target entitis). For candidate ranking, we compare with several baseline approaches as follows: • BOW: We compute cosine similarity over bagof-words vectors with tf-idf values to measure the context similarity between a mention and its candidates. • Pair-wise Cross-genre Supervised Learning: We first construct a vocabulary by choosing the top 100,000 frequent terms. Then we randomly sample 48,000 instances for training and 12,000 instances for development. At the pre-training step, we set the number of hidden layers as 3, the size of each hidden layer as 1000, the masking noise probability for the first layer as 0.7, and a Gaussian noise with standard deviation of 0.1 for higher layers. The learning rate is set to be 0.01. At the fine-tuning stage, we add a 200 units layer on top of auto-encoders and optimize the neural network models based on the training data. 
• Within-genre Unsupervised Learning: We directly train morph mention and entity embeddings from the large-scale tweets and web documents that we collect. We set the window size as 10 and the vector dimension as 800 based on the development set. The overall performance of various resolution approaches using perfect morph mentions is shown in Figure 4. We can clearly see that our second within-genre learning approach achieves the best performance. Figure 5 demonstrates the differences between our two deep learning based methods. When learning semantic embeddings directly from Wikipedia, we can see that the top 10 closest entities of the mention “平西王(Conquer West King)” are all related to the ancient king “吴 三桂(Wu Sangui)”. Therefore this method is only able to capture the original meanings of morphs. In contrast, when we learn embeddings directly from tweets, most of the closest entities are relevant to its target entity “薄熙来(Bo Xilai)”. 6 Related Work The first morph decoding work (Huang et al., 2013) assumed morph mentions are already discovered and didn’t take contexts into account. To the best of our knowledge, this is the first work on context-aware end-to-end morph decoding. Morph decoding is related to several traditional 592 3 (Eight Beauties) ' (Surrender to Qing Dynasty) 8; (Qinhuai)  (Army of Qing) 1644 (Year 1644)  (Break the Defense) 82 (Fall of Qin Dynasty) /++ (Chen Yuanyuan) 4 6 (Wu Sangui) 1 (Entitled as) ), (Bo Yibo) . (Manchuria) BXL (Bo Xilai) ! (Wang Lijun)  (Wen Qiang) 82 (Fall of Qin Dynasty) " (Zhang Dejiang) -! (King of Han) ) (Bo) 4 6 (Wu Sangui) 5( (Violation of Rules)  (Be Distinguished) BXL (Bo Xilai)  (Suppress Gangster) ! (Wang Lijun) 2* (Murdering Case) " (Zhang Dejiang) &$·9" (Neil Heywood) 7 (Huang Qifan) % (Introduce Investment) “,(Conquer West King)” in Wikipedia “,(Conquer West King)” in tweets “/:(Bo Xilai)” in tweets/web docs Figure 5: Top 10 closest entities to morph and target in different genres Figure 4: Resolution Acc@K for Perfect Morph Mentions NLP tasks: entity mention extraction (e.g., (Zitouni and Florian, 2008; Ohta et al., 2012; Li and Ji, 2014)), metaphor detection (e.g., (Wang et al., 2006; Tsvetkov, 2013; Heintz et al., 2013)), word sense disambiguation (WSD) (e.g., (Yarowsky, 1995; Mihalcea, 2007; Navigli, 2009)), and entity linking (EL) (e.g., (Mihalcea and Csomai, 2007; Ji et al., 2010; Ji et al., 2011; Ji et al., 2014). However, none of these previous techniques can be applied directly to tackle this problem. As mentioned in section 3.1, entity morphs are fundamentally different from regular entity mentions. Our task is also different from metaphor detection because morphs cover a much wider range of semantic categories and can include either abstractive or concrete information. Some common features for detecting metaphors (e.g. (Tsvetkov, 2013)) are not effective for morph extraction: (1). Semantic categories. Metaphors usually fall into certain semantic categories such as noun.animal and noun.cognition. (2). Degree of abstractness. If the subject or an object of a concrete verb is abstract then the verb is likely to be a metaphor. In contrast, morphs can be very abstract (e.g., “函 数(Function)” refers to “杨幂(Yang Mi)” because her first name “幂(Mi)” means the Power Function) or very concrete (e.g., “薄督(Governor Bo)” refers to “薄熙来(Bo Xilai)”). In contrast to traditional WSD where the senses of a word are usually quite stable, the “sense” (target entity) of a morph may be newly emergent or evolve over time rapidly. 
The same morph can also have multiple senses. The EL task focuses more on explicit and formal entities (e.g., named entities), while morphs tend to be informal and convey implicit information. Morph mention detection is also related to malware detection (e.g., (Firdausi et al., 2010; Chandola et al., 2009; Firdausi et al., 2010; Christodorescu and Jha, 2003)) which discovers abnormal behavior in code and malicious software. In contrast our task tackles anomaly texts in semantic context. Deep learning-based approaches have been demonstrated to be effective in disambiguation related tasks such as WSD (Bordes et al., 2012), entity linking (He et al., 2013) and question linking (Yih et al., 2014; Bordes et al., 2014; Yang et al., 2014). In this paper we proved that it’s cru593 cial to keep the genres consistent between learning embeddings and applying embeddings. 7 Conclusions and Future Work This paper describes the first work of contextaware end-to-end morph decoding. By conducting deep analysis to identity the common characteristics of morphs and the unique challenges of this task, we leverage a large amount of unlabeled data and the coreferential and correlation relations to perform collective inference to extract morph mentions. Then we explore deep learning-based techniques to capture the semantics of morph mentions and entities and resolve morph mentions on the fly. Our future work includes exploiting the profiles of target entities as feedback to refine the results of morph mention extraction. We will also extend the framework for event morph decoding. Acknowledgments This work was supported by the US ARL NS-CTA No. W911NF-09-2-0053, DARPA DEFT No. FA8750-13-2-0041, NSF Awards IIS-1523198, IIS-1017362, IIS-1320617, IIS1354329 and HDTRA1-10-1-0120, gift awards from IBM, Google, Disney and Bosch. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. References Y. Bengio, R. Ducharme, P. Vincent, and C. Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155, March. A. Bordes, X. Glorot, J. Weston, and Y. Bengio. 2012. Joint learning of words and meaning representations for open-text semantic parsing. In Proc. of the 15th International Conference on Artificial Intelligence and Statistics (AISTATS2012). A. Bordes, S. Chopra, and J. Weston. 2014. Question answering with subgraph embeddings. In Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP2014). V. Chandola, A. Banerjee, and V. Kumar. 2009. Anomaly detection: A survey. ACM Computing Surveys (CSUR), 41(3):15. P. Chang, M. Galley, and D. Manning. 2008. Optimizing chinese word segmentation for machine translation performance. In Proc. of the Third Workshop on Statistical Machine Translation (StatMT 2008). J. Chen, D. Ji, C Tan, and Z. Niu. 2006. Relation extraction using label propagation based semisupervised learning. In Proc. of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (ACL2006). M. Christodorescu and S. Jha. 2003. Static analysis of executables to detect malicious patterns. In Proc. of the 12th Conference on USENIX Security Symposium (SSYM2003). R. Collobert, J. Weston, L. 
Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537, November. C. Cortes and V. Vapnik. 1995. Support-vector networks. Machine Learning, 20:273–297, September. I. Firdausi, C. Lim, A. Erwin, and A. Nugroho. 2010. Analysis of machine learning techniques used in behavior-based malware detection. In Proc. of the 2010 Second International Conference on Advances in Computing, Control, and Telecommunication Technologies (ACT2010). Z. Harris. 1954. Distributional structure. Word, 10:146–162. Z. He, S. Liu, M. Li, M. Zhou, L. Zhang, and H. Wang. 2013. Learning entity representation for entity disambiguation. In Proc. of the 51st Annual Meeting of the Association for Computational Linguistics (ACL2013). I. Heintz, R. Gabbard, M. Srivastava, D. Barner, D. Black, M. Friedman, and R. Weischedel. 2013. Automatic extraction of linguistic metaphors with lda topic modeling. In Proc. of the ACl2013 Workshop on Metaphor in NLP. H. Huang, Z. Wen, D. Yu, H. Ji, Y. Sun, J. Han, and H. Li. 2013. Resolving entity morphs in censored data. In Proc. of the 51st Annual Meeting of the Association for Computational Linguistics (ACL2013). H. Huang, Y. Cao, X. Huang, H. Ji, and C. Lin. 2014. Collective tweet wikification based on semisupervised graph regularization. In Proc. of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL2014). H. Ji, R. Grishman, H.T. Dang, K. Griffitt, and J. Ellis. 2010. Overview of the tac 2010 knowledge base population track. In Proc. of the Text Analysis Conference (TAC2010). H. Ji, R. Grishman, and H.T. Dang. 2011. Overview of the tac 2011 knowledge base population track. In Proc. of the Text Analysis Conference (TAC2011). 594 H. Ji, J. Nothman, and H. Ben. 2014. Overview of tackbp2014 entity discovery and linking tasks. In Proc. of the Text Analysis Conference (TAC2014). J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. of the Eighteenth International Conference on Machine Learning (ICML2001). Q. Li and H. Ji. 2014. Incremental joint extraction of entity mentions and relations. In Proc. of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL2014). Q. Li, H. Li, H. Ji, W. Wang, J. Zheng, and F. Huang. 2012. Joint bilingual name tagging for parallel corpora. In Proc. of the 21st ACM International Conference on Information and Knowledge Management (CIKM2012). R. Mihalcea and A. Csomai. 2007. Wikify!: linking documents to encyclopedic knowledge. In Proc. of the sixteenth ACM conference on Conference on information and knowledge management (CIKM2007). R. Mihalcea. 2007. Using wikipedia for automatic word sense disambiguation. In Proc. of the Conference of the North American Chapter of the Association for Computational Linguistics (HLTNAACL2007). T. Mikolov, K. Chen, G. Corrado, and J. Dean. 2013a. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. T. Mikolov, I. Sutskever, K. Chen, S.G. Corrado, and J. Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26. R. Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys, 41:10:1–10:69, February. Z. Niu, D. Ji, and C. Tan. 2005. Word sense disambiguation using label propagation based semisupervised learning. In Proc. 
of the 43rd Annual Meeting of the Association for Computational Linguistics (ACL2005). T. Ohta, S. Pyysalo, J. Tsujii, and S. Ananiadou. 2012. Open-domain anatomical entity mention detection. In Proc. of the ACL2012 Workshop on Detecting Structure in Scholarly Discourse. A. Smola and R. Kondor. 2003. Kernels and regularization on graphs. In Proc. of the Annual Conference on Computational Learning Theory and Kernel Workshop (COLT2003). K. Toutanova, D. Klein, C. D. Manning, and Y. Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proc. of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL2003). Y. Tsvetkov. 2013. Cross-lingual metaphor detection using common semantic features. In Proc. of the ACL2013 Workshop on Metaphor in NLP. Z. Wang, H. Wang, H. Duan, S. Han, and S. Yu. 2006. Chinese noun phrase metaphor recognition with maximum entropy approach. In Proc. of the Seventh International Conference on Intelligent Text Processing and Computational Linguistics (CICLing2006). M. Yang, N. Duan, M. Zhou, and H. Rim. 2014. Joint relational embeddings for knowledge-based question answering. In Proc. of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP2014). D. Yarowsky. 1995. Unsupervised word sense disambiguation rivaling supervised methods. In Proc. of the 33rd Annual Meeting on Association for Computational Linguistics (ACL1995). W. Yih, X. He, and C. Meek. 2014. Semantic parsing for single-relation question answering. In Proc. of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL2014). H. Zhang, H. Yu, D. Xiong, and Q. Liu. 2003. Hhmmbased chinese lexical analyzer ictclas. In Proc. of the second SIGHAN workshop on Chinese language processing (SIGHAN2003). B. Zhang, H. Huang, X. Pan, H. Ji, K. Knight, Z. Wen, Y. Sun, J. Han, and B. Yener. 2014. Be appropriate and funny: Automatic entity morph encoding. In Proc. of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL2014). D. Zhou, O. Bousquet, T. Lal, J. Weston, and B. Sch¨olkopf. 2004. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16, pages 321–328. X. Zhu, Z. Ghahramani, and J. Lafferty. 2003. Semisupervised learning using gaussian fields and harmonic functions. In Proc. of the International Conference on Machine Learning (ICML2003). I. Zitouni and R. Florian. 2008. Mention detection crossing the language barrier. In Proc. of the Conference on Empirical Methods in Natural Language Processing (EMNLP2008). 595
2015
57
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 596–605, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Multi-Objective Optimization for the Joint Disambiguation of Nouns and Named Entities Dirk Weissenborn, Leonhard Hennig, Feiyu Xu and Hans Uszkoreit Language Technology Lab, DFKI Alt-Moabit 91c Berlin, Germany {dirk.weissenborn, leonhard.hennig, feiyu, uszkoreit}@dfki.de Abstract In this paper, we present a novel approach to joint word sense disambiguation (WSD) and entity linking (EL) that combines a set of complementary objectives in an extensible multi-objective formalism. During disambiguation the system performs continuous optimization to find optimal probability distributions over candidate senses. The performance of our system on nominal WSD as well as EL improves state-ofthe-art results on several corpora. These improvements demonstrate the importance of combining complementary objectives in a joint model for robust disambiguation. 1 Introduction The task of automatically assigning the correct meaning to a given word or entity mention in a document is called word sense disambiguation (WSD) (Navigli, 2009) or entity linking (EL) (Bunescu and Pasca, 2006), respectively. Successful disambiguation requires not only an understanding of the topic or domain a document is dealing with, but also a deep analysis of how an individual word is used within its local context. For example, the meanings of the word “newspaper”, as in the company or the physical product, often cannot be distinguished by the global topic of the document it was mentioned in, but by recognizing which type of meaning fits best into the local context of its mention. On the other hand, for an ambiguous entity mention such as a person name, e.g., “Michael Jordan”, it is important to recognize the domain or topic of the wider context to distinguish, e.g., between the basketball player and the machine learning expert. The combination of the two most commonly employed reference knowledge bases for WSD and EL, WordNet (Fellbaum, 1998) and Wikipedia, in BabelNet (Navigli and Ponzetto, 2012), has enabled a new line of research towards the joint disambiguation of words and named entities. Babelfy (Moro et al., 2014) has shown the potential of combining these two tasks in a purely knowledge-driven approach that jointly finds connections between potential word senses on a global, document level. On the other hand, typical supervised methods (Zhong and Ng, 2010) trained on sense-annotated datasets are usually quite successful in dealing with individual words in their local context on a sentence level. Hoffart et al. (2011) recognize the importance of combining both local and global context for robust disambiguation. However, their approach is limited to EL and optimization is performed in a discrete setting. We present a system that combines disambiguation objectives for both global and local contexts into a single multi-objective function. The resulting system is flexible and easily extensible with complementary objectives. In contrast to prior work (Hoffart et al., 2011; Moro et al., 2014) we model the problem in a continuous setting based on probability distributions over candidate meanings instead of a binary treatment of candidate meanings during disambiguation. Our approach combines knowledge from various sources in one robust model. 
The system uses lexical and encyclopedic knowledge for the joint disambiguation of words and named entities, and exploits local context information of a mention to infer the type of its meaning. We integrate prior statistics from surface strings to candidate meanings in a “natural” way as starting probability distributions for each mention. The contributions of our work are the following: • a model for joint nominal WSD and EL that outperforms previous state-of-the-art systems on both tasks • an extensible framework for multi-objective 596 disambiguation • an extensive evaluation of the approach on multiple standard WSD and EL datasets • the first work that employs continuous optimization techniques for disambiguation (to our knowledge) • publicly available code, resources and models at https://bitbucket.org/ dfki-lt-re-group/mood 2 Approach Our system detects mentions in texts and disambiguates their meaning to one of the candidate senses extracted from a reference knowledge base. The integral parts of the system, namely mention detection, candidate search and disambiguation are described in detail in this section. The model requires a tokenized, lemmatized and POS-tagged document as input; the output are sense-annotated mentions. 2.1 Knowledge Source We employ BabelNet 2.5.1 as our reference knowledge base (KB). BabelNet is a multilingual semantic graph of concepts and named entities that are represented by synonym sets, called Babel synsets. It is composed of lexical and encyclopedic resources, such as WordNet and Wikipedia. Babel synsets comprise several Babel senses, each of which corresponds to a sense in another knowledge base. For example the Babel synset of “Neil Armstrong” contains multiple senses including for example “armstrong#n#1” (WordNet), “Neil Armstrong” (Wikipedia). All synsets are interlinked by conceptual-semantic and lexical relations from WordNet and semantic relations extracted from links between Wikipedia pages. 2.2 Mention Extraction & Entity Detection We define a mention to be a sequence of tokens in a given document. The system extracts mentions for all content words (nouns, verbs, adjectives, adverbs) and multi-token units of up to 7 tokens that contain at least one noun. In addition, we apply a NER-tagger to identify named entity (NE) mentions. Our approach distinguishes NEs from common nouns because there are many common nouns also referring to NEs, making disambiguation unnecessarily complicated. For example, the word “moon” might refer to songs, films, video games, etc., but we should only consider these meanings if the occurrence suggests that it is used as a NE. 2.3 Candidate Search After potential mentions are extracted, the system tries to identify their candidate meanings, i.e., the appropriate synsets. Mentions without any candidates are discarded. There are various resources one can exploit to map surface strings to candidate meanings. However, existing methods or resources especially for NEs are either missing many important mappings1 or contain many noisy mappings2. Therefore, we created a candidate mapping strategy that tries to avoid noisy mappings while including all potentially correct candidates. Our approach employs several heuristics that aim to avoid noise. Their union yields an almost complete mapping that includes the correct candidate meaning for 97-100% of the examples in the test datasets. Candidate mentions are mapped to synsets based on similarity of their surface strings or lemmas. 
If the surface string or lemma of a mention matches the lemma of a synonym in a synset that has the same part of speech, the synset will be considered as a candidate meaning. We allow partial matches for BabelNet synonyms derived from Wikipedia titles or redirections. However, partial matching is restricted to synsets that belong either to the semantic category “Place” or “Agent”. We make use of the semantic category information provided by the DBpedia ontology3. A partial match allows the surface string of a mention to differ by up to 3 tokens from the Wikipedia title (excluding everything in parentheses) if the partial string occurred at least once as an anchor for the corresponding Wikipedia page. E.g., for the Wikipedia title Armstrong School District (Pennsylvania), the following surface strings would be considered matches: “Armstrong School District (Pennsylvania)”, “Armstrong School District”, “Armstrong”, but not “School” or “District”, since they were never used as an anchor. If there is no match we try the same procedure applied to the lowercase forms of the surface string or the lemma. For persons we allow matches to all partial names, e.g., only first name, first and middle name, last name, etc. In addition to the aforementioned candidate extraction we also match surface strings to candidate entities mentioned on their respective disambigua1e.g., using only the synonyms of a synset 2e.g., partial matches for all synonyms of a synset 3http://wiki.dbpedia.org/Ontology 597 tion pages in Wikipedia4. For cases where adjectives should be disambiguated as nouns, e.g., “English” as a country to “England”, we allow candidate mappings through the pertainment relation from WordNet. Finally, frequently annotated surface strings in Wikipedia are matched to their corresponding entities, where we stipulate “frequently” to mean that the surface string occurs at least 100 times as anchor in Wikipedia and the entity was either at least 100 times annotated by this surface string or it was annotated above average. The distinction between nouns and NEs imposes certain restrictions on the set of potential candidates. Candidate synsets for nouns are noun synsets considered as “Concepts” in BabelNet (as opposed to “Named Entities”) in addition to all synsets of WordNet senses. On the other hand, candidate synsets for NEs comprise all nominal Babel synsets. Thus, the range of candidate sets for NEs properly contains the one for nouns. We include all nominal synsets as potential candidates for NEs because the distinction of NEs and simple concepts is not always clear in BabelNet. For example the synset for “UN” (United Nations) is considered a concept whereas it could also be considered a NE. Finally, if there is no candidate for a potential nominal mention, we try to find NE candidates for it before discarding it. 2.4 Multi-Objective Disambiguation We formulate the disambiguation as a continuous, multi-objective optimization problem. Individual objectives model different aspects of the disambiguation problem. Maximizing these objectives means assigning high probabilities to candidate senses that contribute most to the combined objective. After maximization, we select the candidate meaning with the highest probability as the disambiguated sense. Our model is illustrated in Figure 1. Given a set of objectives O the overall objective function O is defined as the sum of all normalized objectives O ∈O given a set of mentions M: O(M) = X O∈O |MO| |M| · O(M) Omax(M) −Omin(M). 
(1) The continuous approach has several advantages over a discrete setting. First, we can ex4provided by DBpedia at http://wiki.dbpedia. org/Downloads2014 Armstrong - Armstrong_(crater) 0.6 - Neil_Armstrong 0.2 - Louis_Armstrong 0.1 ... jazz - jazz_(music) 0.3 - jazz_(rhetoric) 0.3 - ... Mentions M play - play_(game) 0.4 - play_(instrument) 0.2 - ... Armstrong - Armstrong_(crater) 0.3 - Neil_Armstrong 0.1 - Louis_Armstrong 0.5 - ... Mentions M Objectives . . . While not_converged or i < max_iterations play - play_(game) 0.1 - play_(instrument) 0.6 - ... jazz - jazz_(music) 0.8 - jazz_(rhetoric) 0.1 - ... Figure 1: Illustration of our multi-objective approach to WSD & EL for the example sentence: Armstrong plays jazz. Mentions are disambiguated by iteratively updating probability distributions over their candidate senses with respect to the given objective gradients ∇Oi. ploit well established continuous optimization algorithms, such as conjugate gradient or LBFGS. Second, by optimizing upon probability distributions we are optimizing the actually desired result, in contrast to densest sub-graph algorithms where normalized confidence scores are calculated afterwards, e.g., Moro et al. (2014). Third, discrete optimization usually works on a single candidate per iteration whereas in a continuous setting, probabilities are adjusted for each candidate, which is computationally advantageous for highly ambiguous documents. We normalize each objective using the difference of its maximum and minimum value for a given document, which makes the weighting of the objectives different for each document. The maximum/minimum values can be calculated analytically or, if this is not possible, by running the optimization algorithm with only the given objective for an approximate estimate for the maximum and with its negated form for an approximate minimum. Normalization is important for optimization because it ensures that the individual gradients have similar norms on average for each objective. Without normalization, optimization is biased towards objectives with large gradients. Given that one of the objectives can be applied to only a fraction of all mentions (e.g., only nominal mentions), we scale each objective by the fraction of mentions it is applied to. Note that our formulation could easily be extended to using additional coefficients for each ob598 jective. However, these hyper-parameters would have to be estimated on development data and therefore, this method could hurt generalization. Prior Another advantage of working with probability distributions over candidates is the easy integration of prior information. For example, the word “Paris” without further context has a strong prior on its meaning as a city instead of a person. Our approach utilizes prior information in form of frequency statistics over candidate synsets for a mention’s surface string. These priors are derived from annotation frequencies provided by WordNet and Wikipedia. We make use of occurrence frequencies extracted by DBpedia Spotlight (Daiber et al., 2013) for synsets containing Wikipedia senses in case of NE disambiguation. For nominal WSD, we employ frequency statistics from WordNet for synsets containing WordNet senses. Laplace-smoothing is applied to all prior frequencies. The priors serve as initialization for the probability distributions over candidate synsets. Note that we use priors “naturally”, i.e., as actual priors for initialization only and not during disambiguation itself. 
They should not be applied during disambiguation because these priors can be very strong and are not domain independent. However, they provide a good initialization which is important for successful continuous optimization. 3 Disambiguation Objectives 3.1 Coherence Objective Jointly disambiguating all mentions within a document has been shown to have a large impact on disambiguation quality, especially for named entities (Kulkarni et al., 2009). It requires a measurement of semantic relatedness between concepts that can for example be extracted from a semantic network like BabelNet. However, semantic networks usually suffer from data sparsity where important links between concepts might be missing. To deal with this issue, we adopt the idea of using semantic signatures from Moro et al. (2014). Following their approach, we create semantic signatures for concepts and named entities by running a random walk with restart (RWR) in the semantic network. We count the times a vertex is visited during RWR and define all frequently visited vertices to be the semantic signature (i.e., a set of highly related vertices) of the starting concept or named entity vertex. Our coherence objective aims at maximizing the semantic relatedness among selected candidate senses based on their semantic signatures Sc. We define the continuous objective using probability distributions pm(c) over the candidate set Cm of each mention m ∈M in a document as follows: Ocoh(M) = X m∈M c∈Cm X m′∈M m′̸=m c′∈Cm′ s(m, c, m′, c′) s(m, c, m′, c′) = pm(c) · pm′(c′) · 1((c, c′) ∈S) pm(c) = ewm,c P c′∈Cm ewm,c′ , (2) where 1 denotes the indicator function and pm(c) is a softmax function. The only free, optimizable parameters are the softmax weights wm. This objective includes all mentions, i.e., MOcoh = M. It can be interpreted as finding the densest subgraph where vertices correspond to mention-candidate pairs and edges to semantic signatures between candidate synsets. However, in contrast to a discrete setup, each vertex is now weighted by its probability and therefore each edge is weighted by the product of its adjacent vertex probabilities. 3.2 Type Objective One of the biggest problems for supervised approaches to WSD is the limited size and synset coverage of available training datasets such as SemCor (Miller et al., 1993). One way to circumvent this problem is to use a coarser set of semantic classes that groups synsets together. Previous studies on using semantic classes for disambiguation showed promising results (IzquierdoBevi´a et al., 2006). For example, WordNet provides a mapping, called lexnames, of synsets into 45 types, which is based on the syntactic categories of synsets and their logical groupings5. In WordNet 13.5% of all nouns are ambiguous with an average ambiguity of 2.79 synsets per lemma. Given a noun and a type (lexname), the percentage of ambiguous nouns drops to 7.1% for which the average ambiguity drops to 2.33. This indicates that exploiting type classification for disambiguation can be very useful. Similarly, for EL it is important to recognize the type of an entity mention in a local context. 5http://wordnet.princeton.edu/man/ lexnames.5WN.html 599 For example, in the phrase “London beats Manchester” it is very likely that the two city names refer to sports clubs and not to the cities. 
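Before detailing the type mapping and features, the coherence objective above can be made concrete with a short sketch (a simplified illustration under assumed data structures, not the system's code; only unordered mention pairs are summed, a constant factor of two relative to Eq. 2 that does not change the maximizer).

```python
import math
from typing import Dict, Set, Tuple

def softmax(weights: Dict[str, float]) -> Dict[str, float]:
    """p_m(c) = exp(w_{m,c}) / sum_{c'} exp(w_{m,c'})."""
    mx = max(weights.values())
    exps = {c: math.exp(w - mx) for c, w in weights.items()}
    z = sum(exps.values())
    return {c: e / z for c, e in exps.items()}

def coherence(mention_weights: Dict[str, Dict[str, float]],
              signatures: Set[Tuple[str, str]]) -> float:
    """O_coh: p_m(c) * p_m'(c') summed over candidate pairs of different
    mentions that are linked by a semantic signature (treated symmetrically)."""
    probs = {m: softmax(w) for m, w in mention_weights.items()}
    mentions = list(probs)
    total = 0.0
    for i, m in enumerate(mentions):
        for m2 in mentions[i + 1:]:
            for c, p in probs[m].items():
                for c2, p2 in probs[m2].items():
                    if (c, c2) in signatures or (c2, c) in signatures:
                        total += p * p2
    return total

# Toy example for "Armstrong plays jazz": Louis_Armstrong is in the
# semantic signature of jazz_music.
weights = {"Armstrong": {"Neil_Armstrong": 0.0, "Louis_Armstrong": 0.0},
           "jazz": {"jazz_music": 0.0, "jazz_rhetoric": 0.0}}
signatures = {("Louis_Armstrong", "jazz_music")}
print(coherence(weights, signatures))  # 0.25 under uniform initial distributions
```

Raising the probability of the linked pair (Louis_Armstrong, jazz_music) increases the objective, which is exactly the behaviour the gradient-based optimizer exploits.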
We utilize an existing mapping from Wikipedia pages to types from the DBpedia ontology, restricting the set of target types to the following: “Activity”, “Organisation”, “Person”, “Event”, “Place” and “Misc” for the rest. We train a multi-class logistic regression model for each set of types that calculates probability distributions qm(t) over WN- or DBpedia-types t given a noun- or a NE-mention m, respectively. The features used as input to the model are the following: • word embedding of mention’s surface string • sum of word embeddings of all sentence words excluding stopwords • word embedding of the dependency parse parent • collocations of surrounding words as in Zhong et al. (2010) • POS tags with up to 3 tokens distance to m • possible types of candidate synsets We employed pre-trained word embeddings from Mikolov et al. (2013) instead of the words themselves to increase generalization. Type classification is included as an objective in the model as defined in equation 3. It puts type specific weights derived from type classification on candidate synsets, enforcing candidates of fitting type to have higher probabilities. The objective is only applied to noun, NE and verb mentions, i.e., MOtyp = Mn ∪MNE ∪Mv. Otyp(M) = X m∈MOtyp X c∈Cm qm(tc) · pm(c) (3) 3.3 Regularization Objective Because candidate priors for NE mentions can be very high, we add an additional L2-regularization objective for NE mentions: OL2(M) = −λ 2 X m∈MNE ∥wm∥2 2 (4) The regularization objective is integrated in the overall objective function as it is, i.e., it is not normalized. Dataset |D| |M| KB SemEval-2015-13 (Sem15) 4 757 BN (to be published) SemEval-2013-12 (Sem13) 13 1931 BN SemEval-2013-12 (Sem13) 13 1644 WN (Navigli et al., 2013) SemEval-2007-17 (Sem07) 3 159 WN (Pradhan et al., 2007) Senseval 3 (Sen3) 4 886 WN (Snyder and Palmer, 2004) AIDA-CoNLL-testb (AIDA) 216 4530 Wiki (Hoffart et al., 2011) KORE50 (KORE) 50 144 Wiki (Hoffart et al., 2012) Table 1: List of datasets used in experiments with information about their number of documents (D), annotated noun and/or NE mentions (M), and their respective target knowledge base (KB): BNBabelNet, WN-WordNet, Wiki-Wikipedia. 4 Experiments 4.1 Datasets We evaluated our approach on 7 different datasets, comprising 3 WSD datasets annotated with WordNet senses, 2 datasets annotated with Wikipedia articles for EL and 2 more recent datasets annotated with Babel synsets. Table 1 contains a list of all datasets. Besides these test datasets we used SemCor (Miller et al., 1993) as training data for WSD and the training part of the AIDA CoNLL dataset for EL. 4.2 Setup For the creation of semantic signatures we choose the same parameter set as defined by Moro et al. (2014). We run the random walk with a restart probability of 0.85 for a total of 1 million steps for each vertex in the semantic graph and keep vertices visited at least 100 times as semantic signatures. The L2-regularization objective for named entities is employed with λ = 0.001, which we found to perform best on the training part of the AIDACoNLL dataset. We trained the multi-class logistic regression model for WN-type classification on SemCor and for DBpedia-type classification on the training part of the AIDA-CoNLL dataset using LBFGS and L2-Regularization with λ = 0.01 until convergence. 
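This training step is easy to reproduce outside the authors' toolkit; the following is a hedged sketch using scikit-learn (the random feature vectors are stand-ins for the embedding, POS, collocation, and candidate-type features listed above, and are not part of the original pipeline).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for the real mention features.
rng = np.random.default_rng(0)
n_train, dim = 200, 50
X_train = rng.normal(size=(n_train, dim))

# Coarse DBpedia-style target types for NE mentions.
TYPES = ["Activity", "Organisation", "Person", "Event", "Place", "Misc"]
y_train = rng.integers(len(TYPES), size=n_train)

# Multi-class logistic regression trained with LBFGS and L2 regularization;
# sklearn's C is (roughly) the inverse of the regularization weight lambda.
clf = LogisticRegression(solver="lbfgs", penalty="l2", C=1.0 / 0.01, max_iter=1000)
clf.fit(X_train, y_train)

# q_m(t): distribution over types for a new mention's feature vector.
x_mention = rng.normal(size=(1, dim))
q = {TYPES[t]: p for t, p in zip(clf.classes_, clf.predict_proba(x_mention)[0])}
print(q)
```

The resulting distributions q_m(t) are what the type objective in Eq. 3 uses to reweight candidate senses.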
Our system optimizes the combined multiobjective function using Conjugate Gradient 600 System KB Description IMS (Zhong and Ng, 2010) WN supervised, SVM KPCS (Hoffart et al., 2011) Wiki greedy densest-subgraph on combined mention-entity, entity-entity measures KORE (Hoffart et al., 2012) Wiki extension of KPCS with keyphrase relatedness measure between entities MW (Milne and Witten, 2008) Wiki Normalized Google Distance Babelfy (Moro et al., 2014) BN greedy densest-subgraph on semantic signatures Table 2: Systems used for comparison during evaluation. (Hestenes and Stiefel, 1952) with up to a maximum of 1000 iterations per document. We utilized existing implementations from FACTORIE version 1.1 (McCallum et al., 2009) for logistic regression, NER tagging and Conjugate Gradient optimization. For NER tagging we used a pre-trained stacked linear-chain CRF (Lafferty et al., 2001). 4.3 Systems We compare our approach to state-of-the-art results on all datasets and a most frequent sense (MFS) baseline. The MFS baseline selects the candidate with the highest prior as described in section 2.4. Table 2 contains a list of all systems we compared against. We use Babelfy as our main baseline, because of its state-of-the-art performance on all datasets and because it also employed BabelNet as its sense inventory. Note that Babelfy achieved its results with different setups for WSD and EL, in contrast to our model, which uses the same setup for both tasks. 4.4 General Results We report the performance of all systems in terms of F1-score. To ensure fairness we restricted the candidate sets of the target mentions in each dataset to candidates of their respective reference KB. Note that our candidate mapping strategy ensures for all datasets a 97%−100% chance that the target synset is within a mention’s candidate set. This section presents results on the evaluation datasets divided by their respective target KBs: WordNet, Wikipedia and BabelNet. WordNet Table 3 shows the results on three datasets for the disambiguation of nouns to WordSystem Sens3 Sem07 Sem13 MFS 72.6 65.4 62.8 IMS 71.2 63.3 65.7 Babelfy 68.3 62.7 65.9 Our 68.8 66.0 72.8 Table 3: Results for nouns on WordNet annotated datasets. System AIDA KORE MFS 70.1 35.4 KPCS 82.2 55.6 KORE-LSH-G 81.8 64.6 MW 82.3 57.6 Babelfy 82.1 71.5 Our 85.1 67.4 Table 4: Results for NEs on Wikipedia annotated datasets. Net. Our approach exhibits state-of-the-art results outperforming all other systems on two of the three datasets. The model performs slightly worse on the Senseval 3 dataset because of one document in particular where the F1 score is very low compared to the MFS baseline. On the other three documents, however, it performs as good or even better. In general, results from the literature are always worse than the MFS baseline on this dataset. A strong improvement can be seen on the SemEval 2013 Task 12 dataset (Sem13), which is also the largest dataset. Our system achieves an improvement of nearly 7% F1 over the best other system, which translates to an error reduction of roughly 20% given that every word mention gets annotated. Besides the results presented in Table 3, we also evaluated the system on the SemEval 2007 Task 7 dataset for coarse grained WSD, where it achieved 85.5% F1 compared to the best previously reported result of 85.5% F1 from Ponzetto et al. (2010) and Babelfy with 84.6%. Wikipedia The performance on entity linking was evaluated against state-of-the-art systems on two different datasets. 
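All scores reported here and below are F1 values over mention-to-sense assignments; as a point of reference, a minimal micro-F1 sketch is given below (the dictionary format and the handling of unattempted mentions are simplifying assumptions, and exact conventions differ slightly across the shared tasks).

```python
from typing import Dict

def f1_score(gold: Dict[str, str], predicted: Dict[str, str]) -> float:
    """Micro F1: precision over attempted mentions, recall over gold mentions."""
    correct = sum(1 for m, s in predicted.items() if gold.get(m) == s)
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

gold = {"m1": "Louis_Armstrong", "m2": "jazz_music", "m3": "play_instrument"}
pred = {"m1": "Louis_Armstrong", "m2": "jazz_rhetoric"}
print(round(f1_score(gold, pred), 3))  # 0.4
```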
The results in Table 4 demonstrate that our model can compete with the best existing models, showing superior results especially on the large AIDA CoNLL6 test dataset comprising 216 news texts, where we achieve an error reduction of about 16%, resulting in a new state-of-the-art of 85.1% F1. On the other hand, our system is slightly worse on the KORE dataset compared to Babelfy (6 errors more in total), which might be due to the strong priors and 6the largest, freely available dataset for EL. 601 System Sem13 Sem15 MFS 66.7 71.1 Babelfy 69.2 – Best other – 64.8 Our 71.5 75.4 Table 5: Results for nouns and NEs on BabelNet annotated datasets. System Sem13 Sem15 AIDA MFS 66.7 71.1 70.1 Otyp 68.1 73.8 78.0 Ocoh + OL2 68.1 69.6 82.7 Ocoh + Otyp + OL2 71.5 75.4 85.1 Table 6: Detailed results for nouns and NEs on BabelNet annotated datasets and AIDA CoNLL. the small context. However, the dataset is rather small, containing only 50 sentences, and has been artificially tailored to the use of highly ambiguous entity mentions. For example, persons are most of the time only mentioned by their first names. It is an interesting dataset because it requires the system to employ a lot of background knowledge about mentioned entities. BabelNet Table 5 shows the results on the 2 existing BabelNet annotated datasets. To our knowledge, our system shows the best performance on both datasets in the literature. An interesting observation is that the F1 score on SemEval 2013 with BabelNet as target KB is lower compared to WordNet as target KB. The reason is that ambiguity rises for nominal mentions by including concepts from Wikipedia that do not exist in WordNet. For example, the Wikipedia concept “formal language” becomes a candidate for the surface string “language”. 4.5 Detailed Results We also experimented with different objective combinations, namely “type only” (Otyp), “coherence only” (Ocoh +OL2) and “all” (Ocoh +Otyp + OL2), to evaluate the impact of the different objectives. Table 6 shows results of employing individual configurations compared to the MFS baseline. Results for only using coherence or type exhibit varying performance on the datasets, but still consistently exceed the strong MFS baseline. Combining both objectives always yields better results compared to all other configurations. This finding is important because it proves that the objectives proposed in this work are indeed complementary, and thus demonstrates the significance of combining complementary approaches in one robust framework such as ours. An additional observation was that DBpediatype classification slightly overfitted on the AIDA CoNLL training part. When removing DBpediatype classification from the type objective, results increased marginally on some datasets except for the AIDA CoNLL dataset, where results decreased by roughly 3% F1. The improvements of using DBpedia-type classification are mainly due to the fact that the classifier is able to correctly classify names of places in tables consisting of sports scores not to the “Place” type but to the “Organization” type. Note that the AIDA CoNLL dataset (train and test) contains many of those tables. This shows that including supervised objectives into the system helps when data is available for the domain. 4.6 Generalization We evaluated the ability of our system to generalize to different domains based on the SemEval 2015 Task 13 dataset. It includes documents from the bio-medical, the math&computer and general domains. 
Our approach performs particularly well on the bio-medical domain with 86.3% F1 (MFS: 77.3%). Results on the math&computer domain (58.8% F1, MFS: 57.0%), however, reveal that performance still strongly depends on the document topic. This indicates that either the employed resources do not cover this domain as well as others, or that it is generally more difficult to disambiguate. Another potential explanation is that enforcing only pairwise coherence does not take the hidden concepts computer and maths into account, which connect all concepts, but are never actually mentioned. An interesting point for future research might be the introduction of an additional objective or the extension of the coherence objective to allow indirect connections between candidate meanings through shared topics or categories. Besides these very specific findings, the model’s ability to generalize is strongly supported by its good results across all datasets, covering a variety of different topics. 5 Related Work WSD Approaches to WSD can be distinguished by the kind of resource exploited. The two main resources for WSD are sense annotated datasets and knowledge bases. Typical supervised ap602 proaches like IMS (Zhong and Ng, 2010) train classifiers that learn from existing, annotated examples. They suffer from the sparsity of sense annotated datasets that is due to the data acquisition bottleneck (Pilehvar and Navigli, 2014). There have been approaches to overcome this issue through the automatic generation of such resources based on bootstrapping (Pham et al., 2005), sentences containing unambiguous relatives of senses (Martinez et al., 2008) or exploiting Wikipedia (Shen et al., 2013). On the other hand, knowledge-based approaches achieve good performances rivaling state-of-the-art supervised systems (Ponzetto and Navigli, 2010) by using existing structured knowledge (Lesk, 1986; Agirre et al., 2014), or take advantage of the structure of a given semantic network through connectivity or centrality measures (Tsatsaronis et al., 2007; Navigli and Lapata, 2010). Such systems benefit from the availability of numerous KBs for a variety of domains. We believe that both knowledge-based approaches and supervised methods have unique, complementary abilities that need to be combined for sophisticated disambiguation. EL Typical EL systems employ supervised machine learning algorithms to classify or rank candidate entities (Bunescu and Pasca, 2006; Milne and Witten, 2008; Zhang et al., 2010). Common features include popularity metrics based on Wikipedia’s graph structure or on name mention frequency (Dredze et al., 2010; Han and Zhao, 2009), similarity metrics exploring Wikipedia’s concept relations (Han and Zhao, 2009), and string similarity features. Mihalcea and Csomai (2007) disambiguate each mention independently given its sentence level context only. In contrast, Cucerzan (2007) and Kulkarni et al. (Kulkarni et al., 2009) recognize the interdependence between entities in a wider context. The most similar work to ours is that of Hoffart et al. (2011) which was the first that combined local and global context measures in one robust model. However, objectives and the disambiguation algorithm differ from our work. They represent the disambiguation task as a densest subgraph problem where the least connected entity is eliminated in each iteration. The discrete treatment of candidate entities can be problematic especially at the beginning of disambiguation where it is biased towards mentions with many candidates. 
Babelfy (Moro et al., 2014) is a knowledgebased approach for joint WSD and EL that also uses a greedy densest subgraph algorithm for disambiguation. It employs a single coherence model based on semantic signatures similar to our coherence objective. The system’s very good performance indicates that the semantic signatures provide a powerful resource for joint disambiguation. However, because we believe it is not sufficient to only enforce semantic agreement among nouns and entities, our approach includes an objective that also focuses on the local context of mentions, making it more robust. 6 Conclusions & Future Work We have presented a novel approach for the joint disambiguation of nouns and named entities based on an extensible framework. Our system employs continuous optimization on a multiobjective function during disambiguation. The integration of complementary objectives into our formalism demonstrates that robust disambiguation can be achieved by considering both the local and the global context of a mention. Our model outperforms previous state-of-the-art systems for nominal WSD and for EL. It is the first system that achieves such results on various WSD and EL datasets using a single setup. In future work, new objectives should be integrated into the framework and existing objectives could be enhanced. For example, it would be interesting to express semantic relatedness continuously rather than in a binary setting for the coherence objective. Additionally, using the entire model during training could ensure better compatibility between the different objectives. At the moment, the model itself is composed of different pre-trained models that are only combined during disambiguation. Acknowledgment This research was partially supported by the German Federal Ministry of Education and Research (BMBF) through the projects ALL SIDES (01IW14002), BBDC (01IS14013E), and by the German Federal Ministry of Economics and Energy (BMWi) through the project SD4M (01MD15007B), and by Google through a Focused Research Award granted in July 2013. 603 References [Agirre et al.2014] Eneko Agirre, Oier Lopez de Lacalle, and Aitor Soroa. 2014. Random walks for knowledge-based word sense disambiguation. Computational Linguistics, 40(1):57–84. [Bunescu and Pasca2006] Razvan C Bunescu and Marius Pasca. 2006. Using encyclopedic knowledge for named entity disambiguation. In EACL, volume 6, pages 9–16. [Cucerzan2007] Silviu Cucerzan. 2007. Large-scale named entity disambiguation based on wikipedia data. In EMNLP-CoNLL, volume 7, pages 708–716. [Daiber et al.2013] Joachim Daiber, Max Jakob, Chris Hokamp, and Pablo N Mendes. 2013. Improving efficiency and accuracy in multilingual entity extraction. In Proceedings of the 9th International Conference on Semantic Systems, pages 121–124. ACM. [Dredze et al.2010] Mark Dredze, Paul McNamee, Delip Rao, Adam Gerber, and Tim Finin. 2010. Entity disambiguation for knowledge base population. In Proc. of the 23rd International Conference on Computational Linguistics, pages 277–285. Association for Computational Linguistics. [Fellbaum1998] Christiane Fellbaum. 1998. WordNet. Wiley Online Library. [Han and Zhao2009] Xianpei Han and Jun Zhao. 2009. Named entity disambiguation by leveraging wikipedia semantic knowledge. In Proc. of the 18th ACM conference on Information and knowledge management, pages 215–224. ACM. [Hestenes and Stiefel1952] Magnus Rudolph Hestenes and Eduard Stiefel. 1952. Methods of conjugate gradients for solving linear systems, volume 49. 
National Bureau of Standards Washington, DC. [Hoffart et al.2011] Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen F¨urstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proc. of the Conference on Empirical Methods in Natural Language Processing, pages 782–792. Association for Computational Linguistics. [Hoffart et al.2012] Johannes Hoffart, Stephan Seufert, Dat Ba Nguyen, Martin Theobald, and Gerhard Weikum. 2012. Kore: keyphrase overlap relatedness for entity disambiguation. In Proc. of the 21st ACM international conference on Information and knowledge management, pages 545–554. ACM. [Izquierdo-Bevi´a et al.2006] Rub´en Izquierdo-Bevi´a, Lorenza Moreno-Monteagudo, Borja Navarro, and Armando Su´arez. 2006. Spanish all-words semantic class disambiguation using cast3lb corpus. In MICAI 2006: Advances in Artificial Intelligence, pages 879–888. Springer. [Kulkarni et al.2009] Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective annotation of wikipedia entities in web text. In Proc. of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 457–466. ACM. [Lafferty et al.2001] John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of the Eighteenth International Conference on Machine Learning, ICML ’01, pages 282–289, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. [Lesk1986] Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: how to tell a pine cone from an ice cream cone. In Proc. of the 5th annual international conference on Systems documentation, pages 24–26. ACM. [Martinez et al.2008] David Martinez, Oier Lopez De Lacalle, and Eneko Agirre. 2008. On the use of automatically acquired examples for all-nouns word sense disambiguation. J. Artif. Intell. Res.(JAIR), 33:79–107. [McCallum et al.2009] Andrew McCallum, Karl Schultz, and Sameer Singh. 2009. FACTORIE: Probabilistic programming via imperatively defined factor graphs. In Neural Information Processing Systems (NIPS). [Mihalcea and Csomai2007] Rada Mihalcea and Andras Csomai. 2007. Wikify!: linking documents to encyclopedic knowledge. In Proc. of the sixteenth ACM conference on Conference on information and knowledge management, pages 233–242. ACM. [Mikolov et al.2013] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. [Miller et al.1993] George A Miller, Claudia Leacock, Randee Tengi, and Ross T Bunker. 1993. A semantic concordance. In Proc. of the workshop on Human Language Technology, pages 303–308. Association for Computational Linguistics. [Milne and Witten2008] David Milne and Ian H Witten. 2008. Learning to link with wikipedia. In Proc. of the 17th ACM conference on Information and knowledge management, pages 509–518. ACM. [Moro et al.2014] Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity linking meets word sense disambiguation: A unified approach. Transactions of the Association for Computational Linguistics, 2. 604 [Navigli and Lapata2010] Roberto Navigli and Mirella Lapata. 2010. An experimental study of graph connectivity for unsupervised word sense disambiguation. 
Pattern Analysis and Machine Intelligence, IEEE Transactions on, 32(4):678–692. [Navigli and Ponzetto2012] Roberto Navigli and Simone Paolo Ponzetto. 2012. Babelnet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217–250. [Navigli et al.2013] Roberto Navigli, David Jurgens, and Daniele Vannella. 2013. Semeval-2013 task 12: Multilingual word sense disambiguation. In Second Joint Conference on Lexical and Computational Semantics (SEM), volume 2, pages 222–231. [Navigli2009] Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys (CSUR), 41(2):10. [Pham et al.2005] Thanh Phong Pham, Hwee Tou Ng, and Wee Sun Lee. 2005. Word sense disambiguation with semi-supervised learning. In Proc. of the national conference on artificial intelligence, volume 20, page 1093. Menlo Park, CA; Cambridge, MA; London; AAAI Press; MIT Press; 1999. [Pilehvar and Navigli2014] Mohammad Taher Pilehvar and Roberto Navigli. 2014. A large-scale pseudoword-based evaluation framework for stateof-the-art word sense disambiguation. Computational Linguistics, 40(4):837–881. [Ponzetto and Navigli2010] Simone Paolo Ponzetto and Roberto Navigli. 2010. Knowledge-rich word sense disambiguation rivaling supervised systems. In Proc. of the 48th annual meeting of the association for computational linguistics, pages 1522–1531. Association for Computational Linguistics. [Pradhan et al.2007] Sameer S Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. Semeval-2007 task 17: English lexical sample, srl and all words. In Proc. of the 4th International Workshop on Semantic Evaluations, pages 87–92. Association for Computational Linguistics. [Shen et al.2013] Hui Shen, Razvan Bunescu, and Rada Mihalcea. 2013. Coarse to fine grained sense disambiguation in wikipedia. Proc. of SEM, pages 22– 31. [Snyder and Palmer2004] Benjamin Snyder and Martha Palmer. 2004. The english all-words task. In Senseval-3: Third International Workshop on the Evaluation of Systems for the Semantic Analysis of Text, pages 41–43. [Tsatsaronis et al.2007] George Tsatsaronis, Michalis Vazirgiannis, and Ion Androutsopoulos. 2007. Word sense disambiguation with spreading activation networks generated from thesauri. In IJCAI, volume 7, pages 1725–1730. [Zhang et al.2010] Wei Zhang, Jian Su, Chew Lim Tan, and Wen Ting Wang. 2010. Entity linking leveraging: automatically generated annotation. In Proc. of the 23rd International Conference on Computational Linguistics, pages 1290–1298. Association for Computational Linguistics. [Zhong and Ng2010] Zhi Zhong and Hwee Tou Ng. 2010. It makes sense: A wide-coverage word sense disambiguation system for free text. In Proc. of the ACL 2010 System Demonstrations, pages 78–83. Association for Computational Linguistics. 605
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 606–615, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Building a Scientific Concept Hierarchy Database (SCHBASE) Eytan Adar University of Michigan Ann Arbor, MI 48104 [email protected] Srayan Datta University of Michigan Ann Arbor, MI 48104 [email protected] Abstract Extracted keyphrases can enhance numerous applications ranging from search to tracking the evolution of scientific discourse. We present SCHBASE, a hierarchical database of keyphrases extracted from large collections of scientific literature. SCHBASE relies on a tendency of scientists to generate new abbreviations that “extend” existing forms as a form of signaling novelty. We demonstrate how these keyphrases/concepts can be extracted, and their viability as a database in relation to existing collections. We further show how keyphrases can be placed into a semantically-meaningful “phylogenetic” structure and describe key features of this structure. The complete SCHBASE dataset is available at: http://cond.org/schbase.html. 1 Introduction Due to the immense practical value to Information Retrieval and other text mining applications, keyphrase extraction has become an extremely popular topic of research. Extracted keyphrases, specifically those derived from scientific literature, support search tasks (Anick, 2003), classification and tagging (Medelyan et al., 2009), information extraction (Wu and Weld, 2008), and higherlevel analysis such as the tracking of influence and dynamics of information propagation (Shi et al., 2010; Ohniwa et al., 2010). In our own work we use the extracted hierarchies to predict scientific emergence based on how rapidly new variants emerge. Keyphrases themselves capture a diverse set of scientific language (e.g., methods, techniques, materials, phenomena, processes, diseases, devices). Keyphrases, and their uses, have been studied extensively (Gil-Leiva and Alonso-Arroyo, 2007). However, automated keyphrase extraction work has often focused on large-scale statistical techniques and ignored the scientific communication literature. This literature points to the complex ways in which keyphrases are created in light of competing demands: expressiveness, findability, succinct writing, signaling novelty, signaling community membership, and so on (Hartley and Kostoff, 2003; Ibrahim, 1989; Grange and Bloom, 2000; Gil-Leiva and AlonsoArroyo, 2007). Furthermore, the tendency to extract keyphrases through statistical mechanisms often leads to flat keyphrase spaces that make analysis of evolution and emergence difficult. Our contention, and the main motivation behind our work, is that we can do better by leveraging explicit mechanisms adopted by authors in keyphrase generation. Specifically, we focus on a tendency to expand keyphrases by adding terms, coupled with a pressure to abbreviate to retain succinctness. As we argue below, scientific communication has evolved the use of abbreviations to deal with various constraints. Abbreviations, and acronyms specifically, are relatively new in many scientific domains (Grange and Bloom, 2000; Fandrych, 2008) but are now ubiquitous (Ibrahim, 1989; Cheng, 2010). Keyphrase selection is often motivated by increasing article findability within a domain (thereby increasing citation). This strategy leads to keyphrase reuse. 
A competing pressure, however, is to signal novelty in an author’s work which is often done by creating new terminology (e.g., creating a “brand” around a system or idea). For 606 example, a machine learning expert working on a new type of Support Vector Machine will want their article found when someone searches for “Support Vector Machine,” but will also want to add their own unique brand. In response, they will often augment the original keyphrase (e.g., “LeastSquares Support Vector Machine”) rather than inventing a completely new one. Unfortunately, continuous expansion will soon render a paper unreadable (e.g., one of many extensions to Polymerase Chain Reaction is Standard Curve Quantitative Competitive Reverse Transcription Polymerase Chain Reaction). Thus emerges a second strategy: abbreviation. Our assertion is that abbreviations are a key mechanism for resolving competing demands. Authors can simultaneously expand keyphrases, thus maintaining both findability and novelty, while at the same time addressing the need to be succinct and non-repetitive. Of interest to us is the phenomena that if a new keyphrase expands an existing keyphrase that has an established abbreviation, the new keyphrase will also be abbreviated (e.g., LS-SVM and SVM). This tendency allows us to construct hierarchies of evolved keyphrases (rather than assuming a flat keyphrase space) which can be leveraged to identify emergence, keyphrase “mash-ups,” and perform other high level analysis. As we demonstrate below, edges represent the rough semantic of EXTENDS or ISSUBTYPEOF. So if keyphrase A is connected to B, we can say A is a subtype of B (e.g., A is “Least-Squares Support Vector Machine” and B is “Support Vector Machine”). In this paper we introduce SCHBASE, a hierarchical database of keyphrases. We demonstrate how we can simply, but effectively, extract keyphrases by mining abbreviations from scientific literature and composing those keyphrases into semantically-meaningful hierarchies. We further show that abbreviations are a viable mechanism for building a domain-specific keyphrase database by comparing our extracted keyphrases to a number of author-defined and automaticallycreated keyphrase corpora. Finally, we illustrate how authors build upon each others’ terminology over time to create new keyphrases.1 1Full database available at: http://cond.org/schbase.html 2 Related Work Initial work in keyphrase extraction utilized heuristics that were based on the understood structure of scientific documents (Edmundson, 1969). As more data became available, it was possible to move away from heuristic cues and to leverage statistical techniques (Paice and Jones, 1993; Turney, 2000; Frank et al., 1999) that could identify keyphrases within, and between, documents. The guiding model in this approach is that phrases that appear as statistical “anomalies” (by some measure) are effective for summarizing a document or corpus. This style of keyphrase extraction represents much of the current state-of-theart (Kim et al., 2010). Specific extensions in this space involve the use of network structures (Mihalcea and Tarau, 2004; Litvak and Last, 2008; Das Gollapalli and Caragea, 2014), part-of-speech features (Barker and Cornacchia, 2000; Hulth, 2003), or more sophisticated metrics (Tomokiyo and Hurst, 2003). However, as we note above, these statistical approaches largely ignore the underlying tensions in scientific communication that lead to the creation of new keyphrases and how they are signaled to others. 
The result is that these techniques often find statistically “anomalous” phrases which often are not valid scientific concepts (but are simply uncommon phrasing), are unstructured and disconnected, and inflexible to size variance (as in the case of fixed length n-grams), and fail to capture extremely rare terminology. The idea that abbreviations may be useful for keyphrase extraction has been partially realized. Nguyen et al., (2007) found that they could produce better keyphrases by extending existing models (Frank et al., 1999) to include an acronym indicator as a feature. That is, if a candidate phrase had an associated parenthetical acronym associated with it in the text a binary feature would be set. This approach has been implemented by others (Bordea and Buitelaar, 2010). We propose to expand on this idea by implementing a simple, but effective, solution by performing abbreviation extraction to build a hierarchical keyphrase database – a form of open-information extraction (Etzioni et al., 2008) on large scientific corpora. 3 Keyphrases and Hierarchies Our high level strategy for finding an initial set of keyphrases is to mine a corpus for abbrevia607 tion expansions. This is a simple strategy, but as we show below, highly effective. Though the idea that abbreviations and keyphrases are linked fits within our understanding of scientific writing, we confirmed our intuition through a small experiment. Specifically, we looked at the 85 unique keyphrases (in this case, article titles) listed in the Wikipedia entry for List of Machine Learning Concepts (Wikipedia, 2014). These ranged from well known terms (e.g., Support Vector Machines and Autoencoders) to less known (e.g., Information fuzzy networks). In all 85 cases we were able to find an abbreviation on the Web (using Google) alongside the expansion (e.g., searching for the phrases “Support Vector Machines (SVMs)” or “Information Fuzzy Networks (IFN)”). Though there may be bias in the use of abbreviations in the Machine Learning literature, our experience has been that this holds in other domains as well. When a scientific keyphrase is used often enough, someone, somewhere, will have abbreviated it. 3.1 Abbreviation Extraction To find all abbreviation expansions we use the unsupervised SaRAD algorithm (Adar, 2004). This algorithm is simple to implement, does not require extremely large amounts of data, works for both acronyms and more general abbreviations, and has been demonstrated as effective in various contexts (Adar, 2004; Schwartz and Hearst, 2003). However, our solution does not depend on a specific implementation, only that we are able to accurately identify abbreviation expansions. Adar (2004) presents the full details for the algorithm, but for completeness we present the high level details. The algorithm progresses by identifying abbreviations inside of parentheses (defined as single words with at least one capital letter). The algorithm then extracts a “window” of text preceding the parenthesis, up to n words long (where n is the character length of the abbreviation plus padding). This window does not cross sentence boundaries. Within the window all possible “explanations” of the abbreviation are derived. An explanation consists of a continuous subsequence of words that contain all the characters of the original abbreviation in order. 
For example, the window “determine the geographical distribution of ribonucleic acid” preceding the abbreviation “RNA” includes the explanations: “determine the geographical,” “graphical distribution of ribonucleic acid” and “ribonucleic acid” (matching characters in italics). In the example above there are ten explanations (five unique). Each explanation is scored heuristically: 1 point for each abbreviation character at the start of a word; 1 point subtracted for every word between the explanation and the parenthesis; 1 point bonus if the explanation is adjacent to the parenthesis; 1 point subtracted for each extra word beyond the abbreviation length. For the explanations above, the scores are −4, 0, and 3 respectively. The highest scoring match (we require a minimum of 1 point) is returned as the mostly likely expansion. In practice, pairs of extracted abbreviations/expansions are pulled from a large textual corpus. This both allows us to identify variants of expansions (e.g., different pluralization, spelling, hyphenation, etc.) as well as finding more plausible expansions (those that are repeated multiple times in a corpus). Thus, each expansion/abbreviation pair has an associated count which can be used to threshold and filter for increased quality. To discard units of measurement, single letter abbreviations and single word expansions are removed. We return to this decision later, but our experience is also that single word keyphrases are rare. Additionally, expansions containing brackets are not considered as they usually represent mathematical formulae. 3.1.1 The ABBREVCORPUS In our experiments we utilize the ACM Digital Library (ACMDL) as our main corpus. Though the ACMDL is more limited than other collections, it has a number of desirable properties: spanning nearly the entire history (1954-2011) of a domain (Computer Science) with full-text and clean metadata. The corpus itself contains both journal and conference articles (77k and 197k, respectively). In addition to the filtering rules described above, we manually constructed a set of filter terms to remove publication venues, agencies, and other institutions: ‘university’, ‘conference’, ‘symposium’, ‘journal’, ‘foundation’, ‘consortium’, ‘agency’, ‘institute’ and ‘school’ are discarded. We further normalize our keyphrases by lowercasing, removing hyphens, and using the Snowball stemmer (Porter, 2001) to merge plural variants. After stemming and normalizing, we found a total of 155,957 unique abbreviation expansions. Among these, 48,890 expansions occur more than once, 25,107 expansions thrice or more 608 and 16,916 expansions four or more times. We refer to this collection as the ABBREVCORPUS. For each keyphrase we search within the fulltext corpus to identify set of documents containing the keyphrase. This allowed us to find both the earliest mention of the keyphrase (the expansion, not the abbreviation) as well as overall popularity of keyphrases. We do not argue that abbreviations are the norm in the introduction of new keyphrases and may, in fact, only happen much later when the domain is familiar enough with the phrase. To find the expansions in the full-text we utilize a modified suffix-tree that greedily finds the longest-matching phrase and avoids “doublecounting”. For example, if the text contains the phrase, “. . . we utilize a Least-Squares Support Vector Machine for . . . 
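A minimal sketch of this explanation-generation step appears below (simplified to whole-word windows and case-insensitive matching, so it does not reproduce the within-word matches of the example; it is not the SaRAD reference implementation).

```python
from typing import List

def contains_in_order(text: str, abbrev: str) -> bool:
    """True if all characters of the abbreviation occur in the text in order."""
    text, abbrev = text.lower(), abbrev.lower()
    pos = 0
    for ch in abbrev:
        pos = text.find(ch, pos)
        if pos == -1:
            return False
        pos += 1
    return True

def candidate_explanations(window: List[str], abbrev: str) -> List[str]:
    """All contiguous word spans of the window that contain the abbreviation's
    characters in order (the 'explanations' that are scored afterwards)."""
    cands = []
    for i in range(len(window)):
        for j in range(i + 1, len(window) + 1):
            phrase = " ".join(window[i:j])
            if contains_in_order(phrase, abbrev):
                cands.append(phrase)
    return cands

window = "determine the geographical distribution of ribonucleic acid".split()
for c in candidate_explanations(window, "RNA"):
    print(c)
# the candidates include "ribonucleic acid" as well as longer spans such as
# "distribution of ribonucleic acid"
```

Each candidate is then scored by the heuristics described next.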
” it will match against Least-Squares Support Vector Machine but not Least Squares, Support Vector Machines, or Support Vector (also keyphrases in our collection). The distribution of keyphrase frequency is a power-law (many keyphrases appearing once with a long tail) with exponent (α) of 2.17 (fit using Clauset et al., (2009)). 3.2 Building Keyphrase Hierarchies We employ a very simple method of text containment to build keyphrase hierarchies from ABBREVCORPUS. If a keyphrase A is a substring of keyphrase B, A is said to be contained by B (B →A). If a third keyphrase, C, contains B and is contained by A, the containment link between A and B is dropped and two new ones (A →C and C →B) are added. For example for the keyphrases, circuit switching, optical circuit switching and dynamic optical circuit switching, there are links from optical circuit switching to circuit switching, and dynamic optical circuit switching to optical circuit switching, but there is no link from dynamic optical circuit switching to circuit switching. The hierarchies formed in this manner are mostly trees, but in rare cases a keyphrase can have links to multiple branches. Example hierarchies are displayed in Figure 1. For efficiency we sort all keyphrases by length (from largest to shortest) and iterate over each one, testing for containment in all previously “seen” keyphrases. This is computationally intensive, O(n2), but can be parallelized. A potential issue with string containment is that negating prefixes can also appear (e.g., nonmonotonic reasoning and monotonic reasoning). Our algorithm uses a dictionary of negations and can annotate the results. However, in practice we find that only .6% of our data has a leading negating-prefix (“internal” negating prefixes can also be caught in this way, but are similarly rare). It is an application-specific question if we want to consider such pairs as “siblings” or “parent-child” (with both supported). 4 Overlap with Keyphrase Corpora To test our newly-constructed keyphrase database we generate a mixture of human- and machinebuilt datasets to compare. Our goal is to characterize both the intersection (keyphrases appearing in our corpus as well as the external datasets) as well as those keyphrases uniquely captured by each dataset. 4.1 ACM Author keyphrases (ACMCORPUS) The metadata for articles in ACM corpus contain author-provided keyphrases. In the corpus described above, we found 145,373 unique authorprovided keyphrases after stemming and normalization. We discard 16,418 single-word keywords and those that do not appear in the full-text of any document. We retain 116,246 keyphrases which we refer to as the ACMCORPUS. ACMCORPUS WIKICORPUS MSRACORPUS MESHCORPUS MESHCORPUS WIKICORPUS MSRACORPUS ACMCORPUS Figure 2: Keyphrase counts for the ACMCORPUS (powerlaw α = 2.36), WIKICORPUS (2.49), MSRACORPUS (2.55) and MESHCORPUS (2.7) within the ACM full-text. 4.2 Microsoft Academic (MSRACORPUS) Our second keyphrase dataset comes from the Microsoft Academic (MSRA) search corpus (Microsoft, 2015). 
While particularly focused on 609 fault tolerance (1969) fault tolerance index (2006) software fault tolerance (1973) algorithm based fault tolerance (1984) partial fault tolerance (1975) byzantine fault tolerance (1991) practical byzantine fault tolerance (2000) geographic information (1973) volunteered geographic information (2008) geographic information network (2011) geographic information science (1996) geographic information science and technology (2010) geographic information services (2000) geographic information system (1975) geographic information retrieval (1976) geographic information systems and science (2003) Figure 1: Keyphrase hierarchy for Fault Tolerance (top) and Geographic Information (Bottom). Colors encode earliest appearance (brighter green is earlier) Computer Science, this collection contains articles and keyphrases from over a dozen domains2. MSRA provides a list of keyphrases with unique IDs and different stemming variations of each keyphrase. There are a total of 46,978 (without counting stemming variations) of which 30,477 keyphrases occur in ACM full-text corpus after stemming and normalization (64% coverage). 4.3 MeSH (MESHCORPUS) Medical Subject Headings (MeSH) (Lipscomb, 2000) is set of subject headings or descriptors in the life sciences domain. For the purpose of our work, we use the 27,149 keyphrases from the 2014 MeSH dataset. Similar to the other keyphrase lists we only use stemmed and normalized multi-word keywords that occur in in the ACM full-text corpus, which is 4,363 in case of MeSH. 4.4 Wikipedia (WIKICORPUS) Scientific article headings in Wikipedia can often be used as a proxy for keyphrases. To collect relevant titles, we find Wikipedia articles that exactly match (in title name) existing MeSH and MSRA keyphrases. For these “seed” articles, we compile their categories and mark all the articles in these categories as potentially “relevant.” However, as this also captures scientist names (e.g., a 2We know these keyphrases are algorithmically derived, but the details are not disclosed. researcher’s page may be placed under the “Computer Science” category), research institutes and other non-keyphrase matches, we use the page’s infobox as a further filter. Pages containing “person,” “place,” infoboxes, in “book,” “video game,” “TV show” or other related “media” category, and those with geographical coordinates are removed. After applying these filters, we obtain 110,102 unique article titles (after stemming) which we treat as keyphrases. Of these, 39,974 occur in the ACM full-text corpus. 4.5 Results The total overlap for ACMCORPUS, MESHCORPUS, MSRACORPUS and WIKICORPUS are 14.12%, 12.28%, 32.33% and 17.41% respectively. While these numbers seem low, it is worth noting that many of these terms only appear once in the ACM full-text corpus (see Figure 2). Figure 3 illustrates the relationship between the number of times a keyphrase appears in the full-text and the probability that it will appear in ABBREVCORPUS. In all cases, the more often a keyphrase appears in the corpus, the more likely it is to have an abbreviation. If we qualitatively examine popular phrases that do not appear in ABBREVCORPUS we find mathematical forms (e.g., of-the-form, well-defined or a priori), and nouns/entities that are largely unrelated to scientific keyphrases (e.g., New Jersey, Government Agency, and Private Sector). More importantly, 610 the majority of phrases that are never abbreviated are simply not Computer Science keyphrases (we return to this in Section 4.6). 
We were somewhat surprised by the poor overlap of the ACMCORPUS, even for terms that were very common in the full-text. We found that the cause was a large set of “bad” keyphrases. Specifically, 69.3k (69.5%) of author-defined keyphrases (occurring in ACMCORPUS but not in ABBREVCORPUS) are used as a keyword in only one paper. However, they appear more than once in the full-text – often many times. For example, one author (and only one) used if and only if as a keyphrase, which matched a great many articles. The result is that there is little correlation between the number of times a keyphrase appears in the full-text and how many times it used explicitly as a keyphrase in the document metadata. Because these will never be found as an abbreviation, they “pull” the mean probability down. Instead of counting the number of times a keyphrase occurs in the full-text we generate a frequency count based on the number of times authors explicitly use it in the metadata. This new curve, labeled as ACMCORPUS (KEY) in Figure 3 displays a very different tendency, with a rapid upward slope that peaks at 100% for frequentlyoccurring keyphrases. Notably, only 16k (16%) keyphrases appear once in full-text but are never abbreviated (far fewer than the 69.5% above). It is worth briefly considering those terms that appear in ABBREVCORPUS and not in the other keyphrases lists. We find roughly 17.6k, 24.7k, 19.4k, and 21.4k terms that appear in ABBREVCORPUS (with a threshold of 2 to eliminate “noisy” expansions), but not in ACMCORPUS, MESHCORPUS, MSRACORPUS, and WIKICORPUS respectively. As MeSH keyphrases tend to be focused on the biological keyphrases this is perhaps unsurprising but the high numbers for the author-provided ACM keyphrases is unexpected. We find that some of the keyphrases that are in ABBREVCORPUS but not in ACMCORPUS are highly specific (e.g., Multi-object Evolutionary Algorithm Based on Decomposition or Stochastic Variable Graph Model). However, many are also extremely generic terms that one would expect to find in a computer science corpus: Run-Time Error Detection, Parallel Execution Tree, and Little Endian. Our hypothesis is that these are often not the focus of a paper and are unlikely to be selected Probabilty of Appearance in ABBRCORPUS ACMCORPUS (TEXT) ACMCORPUS (KEY) WIKICORPUS MSRACORPUS MESHCORPUS MESHCORPUS WIKICORPUS MSRACORPUS ACMCORPUS (TEXT) ACMCORPUS (KEY) Figure 3: The probability of inclusion of keyphrases in ABBREVCORPUS based on frequency of appearance in full text or, in the case if ACMCORPUS (KEY), frequency of use as a keyword. At frequency x, the y value represents probability of appearence in ABBREVCORPUS if we only consider terms that appear at least x times in the other corpus. by the author. We believe this provides further evidence of the viability of the abbreviation approach to generating good keyphrase lists. 4.6 Domain keyphrases When looking at keyphrases that appear in MESHCORPUS but not in the ABBREVCORPUS we find that many phrases do, in fact, appear in the full text but are never abbreviated. For example, Color Perception and Blood Cell both appear in ACM articles but are not abbreviated. Our hypothesis— which is motivated by the tendency of scientists to abbreviate terms that are deeply familiar to their community (Grange and Bloom, 2000)—is that terms that are possibly distant from the core domain focus tend not to be abbreviated. 
This is supported by the fact that these terms are abbreviated in other collections (e.g., one can find CP as an abbreviation for Color Perception in psychology and cognition work and BC, for Blood Cell, in medical and biological journals). Additional evidence is apparent in Figure 3 which shows that ACMCORPUS keyphrases are more likely to be abbreviated (with far fewer repeats necessary). MSRACORPUS, which contains many Computer Science articles, also has higher probabilities (though not nearly matching the ACM). To test this systematically, we calculated semantic similarity between each keyphrase in 611 the WikiCorpus dataset to “computer science.” Specifically, we utilize Explicit Semantic Analysis (Gabrilovich and Markovitch, 2009) to calculate similarity. In this method, every segment of text is represented in a very high dimensional space in terms of keyphrases (based on Wikipedia categories). The similarity score for each term is between 0 (unrelated) and 1 (very similar). Figure 4 demonstrates that with increasing similarity, the likelihood of abbreviation increases. From this, one may infer that to generate a domain-specific database that excludes unrelated keyphrases, the abbreviation-derived corpus is highly appropriate. Conversely, to get coverage of keyphrases from all scientific domains it is insufficient to mine for abbreviations in one specific domain’s text. Even though a keyphrase may appear in the full-text it will simply never be abbreviated. Figure 4: Probability of a keyphrase appearing in ABBREVCORPUS (y-axis) based on semantic similarity of the keyphrase to “Computer Science” (xaxis, binned exponentially for readability). 4.7 Keyphrase Hierarchies Our hierarchy generation process (see Section 3.2) generated 1716 hierarchies accounting for 8661 unique keyphrases. Most of the hierarchies (1002 or 58%) only contained two nodes (a root and one child). The degree distribution, aggregated across all hierarchies, is again power-law (α = 2.895). Hierarchy sizes are power-law distributed (α = 2.807) and an average “diameter” (max height) of 1.135. The hierarchies contain a giant component with 2302 nodes and 2436 edges. While most of our hierarchies are trees, keyphrases can connect to two independent branches. For example, Least-Squares Support Vector Machines (LS-SVMs) appears in both the Least Squares and Support Vector hierarchies. In total, 649 keyphrases appear in multiple hierarchies, the majority appearing 2. Only 17 keyphrases appear in 3 hierarchies. For example, the particularly long Single Instruction Multiple Thread Evolution Strategy Pattern Search appears in the Evolution(ary) Strategy, Pattern Search, and Single-Instruction-Multiple-Thread hierarchies. These collisions are interesting in that they reflect a mash-ups of different concepts, and by extension, different sub-disciplines or techniques. In some situations, where there is an overlap in many sub-keyphrases, this may indicate that two root keyphrases are in fact equivalent or highly related (e.g., likelihood ratio and log likelihood). We do not currently handle such ambiguity in SCHBASE. To test the semantic interpretation of edges as EXTENDS/ISSUBTYPEOF we randomly sampled 200 edges and manually checked these. We found that in 92% (184) this interpretation was correct. The remaining 16 were largely an artifact of normalization errors rather than a wrong “type” (e.g., “session identifier” and “session id” where clearly a more accurate interpretation is ISEXPANSIONOF). 
We believe it is fair to say that the hierarchies we construct are the “skeleton” of a full EXTENDS hierarchy but one that is nonetheless fairly encompassing. Our qualitative analysis is that most keyphrases that share a type also share a root keyphrase (e.g., “classifier”). It is interesting to consider if edges which are derived by “containment” reflect a temporal pattern. That is, if keyphrase A EXTENDS B, does the first mention of A in the literature happen after B? We find that this is almost always the case. Among the 7136 edges generated by our algorithm only 165 (2.3%) are “reversed.” Qualitatively, we find that these instances appear either due to missing data (the parent keyphrase first appeared outside the ACM) or publication ordering (in some cases the difference in first-appearance is only a year). In most situations the date is only 1-2 years apart. This high degree of consistency lends further support to the tendency of scientists to expand upon keyphrases over time. Figure 5 depicts the mean change in length of “children” in keyphrase hierarchies. The numbers depicted are relative change. Thus, at year “0”, the year the root keyphrase is introduced, there is no relative increase. Within 1 year, new children of that root are 50% larger in character length and after that children continue to “grow” as authors add additional keyphrases. A particularly obvious 612 example of this is the branch for Petri Net (PN) which was extended as Queueing Petri Net (QPN) and then Hierarchically Combined Queueing Petri Nets (HCQPN) and finally Extended Hierarchically Combined Queueing Petri Nets (EHCQPN). Notably, this may have implications to other extractors that assume fixed-sized entities over the history of the collection. Figure 5: Average increase in character length of sub-keyphrases over time 5 Discussion and Future Work Our decision to eliminate single-word keyphrases from consideration is an explicit one. Of the 145k keyphrases in the original ACMCORPUS (pre-filtering), 16,418 (11.29%) were single-word keyphrases. Our experience with the ACM authordefined keyphrases is that such terms are too generic to be useful as “scientific” keyphrases. For example, In all the ACM proceedings, the top5 most common single-word keyphrases are security, visualization, evaluation, design, and privacy. Even in specific sub-domains, such as recommender systems (Proceedings of Recsys), the most popular single-word keyphrases are personalization, recommendation, evaluation, and trust. Contrast these to the most popular multi-word terms: recommender system(s), collaborative filtering, matrix factorization, and social network(s). Notably, in the MSRA corpus, which is algorithmically filtered, only .46% (226 keyphrases) were single word. MeSH, in contrast, has a full 37% of keyphrases as single-term. In most situations these reflect chemical names (e.g., 382 single-word enzymes) or biological structures. In such a domain, and if these keyphrases are desirable, it may be advisable to retain single-word abbreviations. While it may seem surprising, even single words are often abbreviated (e.g., Transaldolase is “T” and Ultrafiltration is “U” or “U/F”). A second key observation is that while the ACM full-text corpus is large, it is by no means “big.” We selected to use it because it controlled and “clean.” However, we have also run our algorithms on the MSRA Corpus (which contains only abstracts) and CiteSeer (which contains fulltext). 
Because the corpora contain more text we find significantly higher overlap with the different keyphrase corpora. However, this comes at the cost of not being able to isolate the domainspecific keyphrases. To put it differently, the broader full-text collections enable to us generate a more fleshed out keyphrase hierarchies that tracks keyphrases across all domains but which may not be appropriate for certain workloads. Finally, it is worth considering the possibility of building hierarchies (and connecting them) by relations other than “containment.” We have begun to utilize metrics such as co-occurrence of keyphrases (e.g., PMI) as well as higher level citation and co-citation structure in the corpora. Thus, we are able to connect terms that are highly related but are textually dissimilar. When experimenting with PMI, for example, we have found a diverse set of edge types including ISUSEDFOR (e.g., “ngram language model” and “machine translation”) or ISUSEDIN (e.g., “Expectation Maximization” and “Baum-Welch” or “euclidean algorithm” and “k-means”). By necessity, edges generated by this technique require an additional classification. 6 Summary We have introduced SCHBASE, a simple, robust, and highly effective system and database of scientific concepts/keyphrases. By leveraging the incentive structure of scientists to expand existing ideas while simultaneously signaling novelty we are able to construct semantically-meaningful hierarchies of related keyphrases. The further tendency by authors to succinctly describe new keyphrases results in a general habit of utilizing abbreviations. We have demonstrated a mechanism to identify these keyphrases by extracting abbreviation expansions and have shown that these keyphrases cover the bulk of “useful” keyphrases within the domain of the corpus. We believe that SCHBASE will enable a number of applications ranging from search, categorization, and analysis of scientific communication patterns. 613 Acknowledgments The authors thank the Microsoft Academic team, Jaime Teevan, Susan Dumais, and Carl Lagoze for providing us with data and advice. This work is supported by the Intelligence Advanced Research Projects Activity (IARPA) via Department of Interior National Business Center contract number D11PC20155. The U.S. government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government. References Eytan Adar. 2004. SaRAD: a simple and robust abbreviation dictionary. Bioinformatics, 20(4):527–533. Peter Anick. 2003. Using terminological feedback for web search refinement: A log-based study. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Informaion Retrieval, SIGIR ’03, pages 88–95, New York, NY, USA. ACM. Ken Barker and Nadia Cornacchia. 2000. Using noun phrase heads to extract document keyphrases. In Howard J. Hamilton, editor, Advances in Artificial Intelligence, volume 1822 of Lecture Notes in Computer Science, pages 40–52. Springer Berlin Heidelberg. Georgeta Bordea and Paul Buitelaar. 2010. Deriunlp: A context based approach to automatic keyphrase extraction. In Proceedings of the 5th international workshop on semantic evaluation, pages 146–149. Association for Computational Linguistics. 
Tsung O. Cheng. 2010. What’s in a name? another unexplained acronym! International Journal of Cardiology, 144(2):291 – 292. Aaron Clauset, Cosma Rohilla Shalizi, and Mark EJ Newman. 2009. Power-law distributions in empirical data. SIAM Review, 51(4):661–703. Sujatha Das Gollapalli and Cornelia Caragea. 2014. Extracting keyphrases from research papers using citation networks. In Twenty-Eighth AAAI Conference on Artificial Intelligence. Harold P Edmundson. 1969. New methods in automatic extracting. Journal of the ACM, 16(2):264– 285, April. Oren Etzioni, Michele Banko, Stephen Soderland, and Daniel S. Weld. 2008. Open information extraction from the web. Communications of the ACM, 51(12):68–74, December. Ingrid Fandrych. 2008. Submorphemic elements in the formation of acronyms, blends and clippings 147. Lexis, page 105. Eibe Frank, Gordon W. Paynter, Ian H. Witten, Carl Gutwin, and Craig G. Nevill-Manning. 1999. Domain-specific keyphrase extraction. In Proceedings of the 16th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI’99, pages 668–673, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Evgeniy Gabrilovich and Shaul Markovitch. 2009. Wikipedia-based semantic interpretation for natural language processing. Journal of Artificial Intelligence Research, 34(1):443–498, March. Isidoro Gil-Leiva and Adolfo Alonso-Arroyo. 2007. Keywords given by authors of scientific articles in database descriptors. Journal of the American Society for Information Science and Technology, 58(8):1175–1187. Bob Grange and D.A. Bloom. 2000. Acronyms, abbreviations and initialisms. BJU International, 86(1):1–6. James Hartley and Ronald N. Kostoff. 2003. How useful are ‘key words’ in scientific journals? Journal of Information Science, 29(5):433–438. Anette Hulth. 2003. Improved automatic keyword extraction given more linguistic knowledge. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing, EMNLP ’03, pages 216–223, Stroudsburg, PA, USA. Association for Computational Linguistics. A.M. Ibrahim. 1989. Acronyms observed. Professional Communication, IEEE Transactions on, 32(1):27–28, Mar. Su Nam Kim, Olena Medelyan, Min-Yen Kan, and Timothy Baldwin. 2010. Semeval-2010 task 5: Automatic keyphrase extraction from scientific articles. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 21–26. Association for Computational Linguistics. Carolyn E. Lipscomb. 2000. Medical subject headings (mesh). Bull Med Libr Assoc. 88(3): 265266. Marina Litvak and Mark Last. 2008. Graph-based keyword extraction for single-document summarization. In Proceedings of the Workshop on Multisource Multilingual Information Extraction and Summarization, MMIES ’08, pages 17–24, Stroudsburg, PA, USA. Association for Computational Linguistics. 614 Olena Medelyan, Eibe Frank, and Ian H. Witten. 2009. Human-competitive tagging using automatic keyphrase extraction. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 3 - Volume 3, EMNLP ’09, pages 1318–1327, Stroudsburg, PA, USA. Association for Computational Linguistics. Microsoft. 2015. Microsoft academic search. http://academic.research.microsoft.com. Accessed: 2015-2-26. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into texts. In Dekang Lin and Dekai Wu, editors, Proceedings of EMNLP 2004, pages 404–411, Barcelona, Spain, July. Association for Computational Linguistics. ThuyDung Nguyen and Min-Yen Kan. 2007. 
Keyphrase extraction in scientific publications. In Dion Hoe-Lian Goh, Tru Hoang Cao, Ingeborg Torvik Sølvberg, and Edie Rasmussen, editors, Asian Digital Libraries. Looking Back 10 Years and Forging New Frontiers, volume 4822 of Lecture Notes in Computer Science, pages 317–326. Springer Berlin Heidelberg. Ryosuke L. Ohniwa, Aiko Hibino, and Kunio Takeyasu. 2010. Trends in research foci in life science fields over the last 30 years monitored by emerging topics. Scientometrics, 85(1):111–127. Chris D. Paice and Paul A. Jones. 1993. The identification of important concepts in highly structured technical papers. In Proceedings of the 16th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR ’93, pages 69–78, New York, NY, USA. ACM. Martin F. Porter. 2001. Snowball: A language for stemming algorithms. http://snowball.tartarus.org/texts/introduction.html. Accessed: 2015-2-26. Ariel S Schwartz and Marti A Hearst. 2003. A simple algorithm for identifying abbreviation definitions in biomedical text. Pacific Symposium on Biocomputing. Pacific Symposium on Biocomputing, page 451462. Xiaolin Shi, Jure Leskovec, and Daniel A. McFarland. 2010. Citing for high impact. In Proceedings of the 10th Annual Joint Conference on Digital Libraries, JCDL ’10, pages 49–58, New York, NY, USA. ACM. Takashi Tomokiyo and Matthew Hurst. 2003. A language model approach to keyphrase extraction. In Proceedings of the ACL 2003 Workshop on Multiword Expressions: Analysis, Acquisition and Treatment - Volume 18, MWE ’03, pages 33–40, Stroudsburg, PA, USA. Association for Computational Linguistics. Peter D. Turney. 2000. Learning algorithms for keyphrase extraction. Information Retrieval, 2(4):303–336, May. Wikipedia. 2014. Wikipedia: List of machine learning concepts. http://en.wikipedia.org/wiki/List of machine learning concepts. Accessed: 2015-2-26. Fei Wu and Daniel S. Weld. 2008. Automatically refining the wikipedia infobox ontology. In Proceedings of the 17th International Conference on World Wide Web, WWW ’08, pages 635–644, New York, NY, USA. ACM. 615
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 53–62, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Text to 3D Scene Generation with Rich Lexical Grounding Angel Chang∗, Will Monroe∗, Manolis Savva, Christopher Potts and Christopher D. Manning Stanford University, Stanford, CA 94305 {angelx,wmonroe4,msavva}@cs.stanford.edu, {cgpotts,manning}@stanford.edu Abstract The ability to map descriptions of scenes to 3D geometric representations has many applications in areas such as art, education, and robotics. However, prior work on the text to 3D scene generation task has used manually specified object categories and language that identifies them. We introduce a dataset of 3D scenes annotated with natural language descriptions and learn from this data how to ground textual descriptions to physical objects. Our method successfully grounds a variety of lexical terms to concrete referents, and we show quantitatively that our method improves 3D scene generation over previous work using purely rule-based methods. We evaluate the fidelity and plausibility of 3D scenes generated with our grounding approach through human judgments. To ease evaluation on this task, we also introduce an automated metric that strongly correlates with human judgments. 1 Introduction We examine the task of text to 3D scene generation. The ability to map descriptions of scenes to 3D geometric representations has a wide variety of applications; many creative industries use 3D scenes. Robotics applications need to interpret commands referring to real-world environments, and the ability to visualize scenarios given highlevel descriptions is of great practical use in educational tools. Unfortunately, 3D scene design user interfaces are prohibitively complex for novice users. Prior work has shown the task remains challenging and time intensive for non-experts, even with simplified interfaces (Savva et al., 2014). ∗The first two authors are listed in alphabetical order. {...L-shaped room with walls that have 2 tones of gray..., A dark room with a pool table...} {...a multicolored table in the middle of the room , ...four red and white chairs and a colorful table, ...} Figure 1: We learn how to ground references such as “L-shaped room” to 3D models in a paired corpus of 3D scenes and natural language descriptions. Sentence fragments in bold were identified as high-weighted references to the shown objects. Language offers a convenient way for designers to express their creative goals. Systems that can interpret natural descriptions to build a visual representation allow non-experts to visually express their thoughts with language, as was demonstrated by WordsEye, a pioneering work in text to 3D scene generation (Coyne and Sproat, 2001). WordsEye and other prior work in this area (Seversky and Yin, 2006; Chang et al., 2014) used manually chosen mappings between language and objects in scenes. To our knowledge, we present the first 3D scene generation approach that learns from data how to map textual terms to objects. First, we collect a dataset of 3D scenes along with textual descriptions by people, which we contribute to the community. We then train a classifier on a scene discrimination task and extract high-weight features that ground lexical terms to 3D models. 
We integrate our learned lexical groundings with a rule-based scene generation approach, and we show through a humanjudgment evaluation that the combination outperforms both approaches in isolation. Finally, we introduce a scene similarity metric that correlates with human judgments. 53 There is a desk and there is a notepad on the desk. There is a pen next to the notepad. Scene Template Input Text on(o0,o1) 3D Scene o0 room on(o1,o2) Parsing o0 – category:room, modelId:420 o1 – category:desk, modelId:132 o2 – category:notepad, modelId:343 o3 – category:pen, modelId:144 on(o1,o3) next_to(o3,o2) o1 desk o3 pen o2 notepad Generation Figure 2: Illustration of the text to 3D scene generation pipeline. The input is text describing a scene (left), which we parse into an abstract scene template representation capturing objects and relations (middle). The scene template is then used to generate a concrete 3D scene visualizing the input description (right). The 3D scene is constructed by retrieving and arranging appropriate 3D models. 2 Task Description In the text to 3D scene generation task, the input is a natural language description, and the output is a 3D representation of a plausible scene that fits the description and can be viewed and rendered from multiple perspectives. More precisely, given an utterance x as input, the output is a scene y: an arrangement of 3D models representing objects at specified positions and orientations in space. In this paper, we focus on the subproblem of lexical grounding of textual terms to 3D model referents (i.e., choosing 3D models that represent objects referred to by terms in the input utterance x). We employ an intermediate scene template representation parsed from the input text to capture the physical objects present in a scene and constraints between them. This representation is then used to generate a 3D scene (Figure 2). A na¨ıve approach to scene generation might use keyword search to retrieve 3D models. However, such an approach is unlikely to generalize well in that it fails to capture important object attributes and spatial relations. In order for the generated scene to accurately reflect the input description, a deep understanding of language describing environments is necessary. Many challenging subproblems need to be tackled: physical object mention detection, estimation of object attributes such as size, extraction of spatial constraints, and placement of objects at appropriate relative positions and orientations. The subproblem of lexical grounding to 3D models has a larged impact on the quality of generated scenes, as later stages of scene generation rely on having a correctly chosen set of objects to arrange. Another challenge is that much common knowledge about the physical properties of objects and the structure of environments is rarely mentioned in natural language (e.g., that most tables are supported on the floor and in an upright orientation). Unfortunately, common 3D representations of objects and scenes used in computer graphics specify only geometry and appearance, and rarely include such information. Prior work in text to 3D scene generation focused on collecting manual annotations of object properties and relations (Rouhizadeh et al., 2011; Coyne et al., 2012), which are used to drive rule-based generation systems. Regrettably, the task of scene generation has not yet benefited from recent related work in NLP. 
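To make the intermediate scene template representation of Figure 2 concrete, the following is a minimal data-structure sketch. The paper does not specify its internal encoding, so the class layout and field names are assumptions of the sketch; the categories, model IDs, and relations are the ones shown in Figure 2.

```python
from dataclasses import dataclass, field
from typing import List, Optional, Tuple

@dataclass
class SceneObject:
    """One object node in a scene template (o0, o1, ... in Figure 2)."""
    index: int
    category: str
    model_id: Optional[str] = None        # concrete 3D model chosen at generation time
    attributes: List[str] = field(default_factory=list)   # e.g. ["wooden"]

@dataclass
class SceneTemplate:
    """Objects plus pairwise spatial constraints such as ("on", 1, 2)."""
    objects: List[SceneObject]
    relations: List[Tuple[str, int, int]]

# The desk/notepad/pen example from Figure 2, written out by hand.
template = SceneTemplate(
    objects=[
        SceneObject(0, "room", "420"),
        SceneObject(1, "desk", "132"),
        SceneObject(2, "notepad", "343"),
        SceneObject(3, "pen", "144"),
    ],
    relations=[("on", 0, 1), ("on", 1, 2), ("on", 1, 3), ("next_to", 3, 2)],
)

for rel, a, b in template.relations:
    print(f"{rel}({template.objects[a].category}, {template.objects[b].category})")
```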
3 Related Work There is much prior work in image retrieval given textual queries; a recent overview is provided by Siddiquie et al. (2011). The image retrieval task bears some similarity to our task insofar as 3D scene retrieval is an approach that can approximate 3D scene generation. However, there are fundamental differences between 2D images and 3D scenes. Generation in image space has predominantly focused on composition of simple 2D clip art elements, as exemplified recently by Zitnick et al. (2013). The task of composing 3D scenes presents a much higherdimensional search space of scene configurations where finding plausible and desirable configurations is difficult. Unlike prior work in clip art generation which uses a small pre-specified set of objects, we ground to a large database of objects that can occur in various indoor environments: 12490 3D models from roughly 270 categories. 54 There is a table and there are four chairs. There are four plates and there are four sandwiches. There is a chair and a table. There is a bed and there is a nightstand next to the bed.  dinning room with four plates, four chairs, and four sandwiches  dark room with two small windows. A rectangular table seating four is in the middle of the room with plates set. There is a set of two gray double doors on another wall.  i see a rectangular table in the center of the room. There are 4 chairs around the table and 4 plates on the table  There is a chair and a circular table in the middle of a floral print room.  a corner widow room with a a table and chair sitting to the east side.  There's a dresser in the corner of the room, and a yellow table with a brown wooden chair.  There is a bed with three pillows and a bedside table next to it.  The room appears to be a bedroom. A blue bed and white nightstand are pushed against the furthest wall. A window is on the left side.  A dark bedroom with a queen bed with blue comforter and three pillows. There is a night stand. One wall is decorated with a large design and another wall has three large windows. Figure 3: Scenes created by participants from seed description sentences (top). Additional descriptions provided by other participants from the created scene (bottom). Our dataset contains around 19 scenes per seed sentence, for a total of 1129 scenes. Scenes exhibit variation in the specific objects chosen and their placement. Each scene is described by 3 or 4 other people, for a total of 4358 descriptions. 3.1 Text to Scene Systems Pioneering work on the SHRDLU system (Winograd, 1972) demonstrated linguistic manipulation of objects in 3D scenes. However, the discourse domain was restricted to a micro-world with simple geometric shapes to simplify parsing and grounding of natural language input. More recently, prototype text to 3D scene generation systems have been built for broader domains, most notably the WordsEye system (Coyne and Sproat, 2001) and later work by Seversky and Yin (2006). Chang et al. (2014) showed it is possible to learn spatial priors for objects and relations directly from 3D scene data. These systems use manually defined mappings between language and their representation of the physical world. This prevents generalization to more complex object descriptions, variations in word choice and spelling, and other languages. It also forces users to use unnatural language to express their intent (e.g., the table is two feet to the south of the window). 
We propose reducing reliance on manual lexicons by learning to map descriptions to objects from a corpus of 3D scenes and associated textual descriptions. While we find that lexical knowledge alone is not sufficient to generate high-quality scenes, a learned approach to lexical grounding can be used in combination with a rule-based system for handling compositional knowledge, resulting in better scenes than either component alone. 3.2 Related Tasks Prior work has generated sentences that describe 2D images (Farhadi et al., 2010; Kulkarni et al., 2011; Karpathy et al., 2014) and referring expressions for specific objects in images (FitzGerald et al., 2013; Kazemzadeh et al., 2014). However, generating scenes is currently out of reach for purely image-based approaches. 3D scene representations serve as an intermediate level of structure between raw image pixels and simpler microcosms (e.g., grid and block worlds). This level of structure is amenable to the generation task but still realistic enough to present a variety of challenges associated with natural scenes. A related line of work focuses on grounding referring expressions to referents in 3D worlds with simple colored geometric shapes (Gorniak and Roy, 2004; Gorniak and Roy, 2005). More recent work grounds text to object attributes such as color and shape in images (Matuszek et al., 2012; Krishnamurthy and Kollar, 2013). Golland et al. (2010) ground spatial relationship language in 3D scenes (e.g., to the left of, behind) by learning from pairwise object relations provided by crowdworkers. In contrast, we ground general descriptions to a wide variety of possible objects. The objects in our scenes represent a broader space of possible referents than the first two lines of work. Unlike the latter work, our descriptions are provided as unrestricted free-form text, rather than filling in specific templates of object references and fixed spatial relationships. 55 4 Dataset We introduce a new dataset of 1128 scenes and 4284 free-form natural language descriptions of these scenes.1 To create this training set, we used a simple online scene design interface that allows users to assemble scenes using available 3D models of common household objects (each model is annotated with a category label and has a unique ID). We used a set of 60 seed sentences describing simple configurations of interior scenes as prompts and asked workers on the Amazon Mechanical Turk crowdsourcing platform to create scenes corresponding to these seed descriptions. To obtain more varied descriptions for each scene, we asked other workers to describe each scene. Figure 3 shows examples of seed description sentences, 3D scenes created by people given those descriptions, and new descriptions provided by others viewing the created scenes. We manually examined a random subset of the descriptions (approximately 10%) to eliminate spam and unacceptably poor descriptions. When we identified an unacceptable description, we also examined all other descriptions by the same worker, as most poor descriptions came from a small number of workers. From our sample, we estimate that less than 3% of descriptions were spam or unacceptably incoherent. To reflect natural use, we retained minor typographical and grammatical errors. Despite the small set of seed sentences, the Turker-provided scenes exhibit much variety in the specific objects used and their placements within the scene. 
Over 600 distinct 3D models appear in at least one scene, and more than 40% of nonroom objects are rotated from their default orientation, despite the fact that this requires an extra manipulation in the scene-building interface. The descriptions collected for these scenes are similarly diverse and usually differ substantially in length and content from the seed sentences.2 5 Model To create a model for generating scene templates from text, we train a classifier to learn lexical 1Available at http://nlp.stanford.edu/data/ text2scene.shtml. 2Mean 26.2 words, SD 17.4; versus mean 16.6, SD 7.2 for the seed sentences. If one considers seed sentences to be the “reference,” the macro-averaged BLEU score (Papineni et al., 2002) of the Turker descriptions is 12.0. groundings. We then combine our learned lexical groundings with a rule-based scene generation model. The learned groundings allow us to select better models, while the rule-based model offers simple compositionality for handling coreference and relationships between objects. 5.1 Learning lexical groundings To learn lexical mappings from examples, we train a classifier on a related grounding task and extract the weights of lexical features for use in scene generation. This classifier learns from a “discrimination” version of our scene dataset, in which the scene in each scene–description pair is hidden among four other distractor scenes sampled uniformly at random. The training objective is to maximize the L2-regularized log likelihood of this scene discrimination dataset under a one-vs.all logistic regression model, using each true scene and each distractor scene as one example (with true/distractor as the output label). The learned model uses binary-valued features indicating the co-occurrence of a unigram or bigram and an object category or model ID. For example, features extracted from the scene-description pair shown in Figure 2 would include the tuples (desk, modelId:132) and (the notepad, category:notepad). To evaluate our learned model’s performance at discriminating scenes, independently of its use in scene generation, we split our scene and description corpus (augmented with distractor scenes) randomly into train, development, and test portions 70%-15%-15% by scene. Using only model ID features, the classifier achieves a discrimination accuracy of 0.715 on the test set; adding features that use object categories as well as model IDs improves accuracy to 0.833. 5.2 Rule-based Model We use the rule-based parsing component described in Chang et al. (2014). This system incorporates knowledge that is important for scene generation and not addressed by our learned model (e.g., spatial relationships and coreference). In Section 5.3, we describe how we use our learned model to augment this model. This rule-based approach is a three-stage process using established NLP systems: 1) The input text is split into multiple sentences and parsed using the Stanford CoreNLP pipeline (Manning et 56 red cup round yellow table green room black top tan love seat black bed open window Figure 4: Some examples extracted from the top 20 highest-weight features in our learned model: lexical terms from the descriptions in our scene corpus are grounded to 3D models within the scene corpus. al., 2014). Head words of noun phrases are identified as candidate object categories, filtered using WordNet (Miller, 1995) to only include physical objects. 2) References to the same object are collapsed using the Stanford coreference system. 
3) Properties are attached to each object by extracting other adjectives and nouns in the noun phrase. These properties are later used to query the 3D model database. We use the same model database as Chang et al. (2014) and also extract spatial relations between objects using the same set of dependency patterns. 5.3 Combined Model The rule-based parsing model is limited in its ability to choose appropriate 3D models. We integrate our learned lexical groundings with this model to build an improved scene generation system. Identifying object categories Using the rulebased model, we extract all noun phrases as potential objects. For each noun phrase p, we extract features {ϕi} and compute the score of a category c being described by the noun phrase as the sum of the feature weights from the learned model in Section 5.1: Score(c | p) = ∑ ϕi∈ϕ(p) θ(i,c), where θ(i,c) is the weight for associating feature ϕi with category c. From categories with a score higher than Tc = 0.5, we select the best-scoring category as the representative for the noun phrase. If no category’s score exceeds Tc, we use the head of the noun phrase for the object category. 3D model selection For each object mention detected in the description, we use the feature weights from the learned model to select a specific object to add to the scene. After using dependency rules to extract spatial relationships and descriptive terms associated with the object, we compute the score of a 3D model m given the category c and text category text category chair Chair round RoundTable lamp Lamp laptop Laptop couch Couch fruit Bowl vase Vase round table RoundTable sofa Couch laptop Computer bed Bed bookshelf Bookcase Table 1: Top groundings of lexical terms in our dataset to categories of 3D models in the scenes. a set of descriptive terms d using a similar sum of feature weights. As the rule-based system may not accurately identify the correct set of terms d, we augment the score with a sum of feature weights over the entire input description x: m = arg max m∈{c} λd ∑ ϕi∈ϕ(d) θ(i,m) + λx ∑ ϕi∈ϕ(x) θ(i,m) For the results shown here, λd = 0.75 and λx = 0.25. We select the best-scoring 3D model with positive score. If no model has positive score, we assume the object mention was spurious and omit the object. 6 Learned lexical groundings By extracting high-weight features from our learned model, we can visualize specific models to which lexical terms are grounded (see Figure 4). These features correspond to high frequency text– 3D model pairs within the scene corpus. Table 1 shows some of the top learned lexical groundings to model database categories. We are able to recover many simple identity mappings without using lexical similarity features, and we capture several lexical variants (e.g., sofa for Couch). A few erroneous mappings reflect common cooccurrences; for example, fruit is mapped to Bowl due to fruit typically being observed in bowls in our dataset. 57 Description In between the doors and the window, there is a black couch with red cushions, two white pillows, and one black pillow. In front of the couch, there is a wooden coffee table with a glass top and two newspapers. Next to the table, facing the couch, is a wooden folding chair. random rule learned combo A round table is in the center of the room with four chairs around the table. There is a double window facing west. A door is on the east side of the room. There is a desk and a computer. 
Seed sentence: MTurk sentences: Figure 5: Qualitative comparison of generated scenes for three input descriptions (one Seed and two MTurk), using the four different methods: random, learned, rule, combo. 7 Experimental Results We conduct a human judgment experiment to compare the quality of generated scenes using the approaches we presented and baseline methods. To evaluate whether lexical grounding improves scene generation, we need a method to arrange the chosen models into 3D scenes. Since 3D scene layout is not a focus of our work, we use an approach based on prior work in 3D scene synthesis and text to scene generation (Fisher et al., 2012; Chang et al., 2014), simplified by using sampling rather than a hill climbing strategy. Conditions We compare five conditions: {random, learned, rule, combo, human}. The random condition represents a baseline which synthesizes a scene with randomly-selected models, while human represents scenes created by people. The learned condition takes our learned lexical groundings, picks the four3 most likely objects, and generates a scene based on them. The rule and combo conditions use scenes generated by the rule-based approach and the combined model, respectively. Descriptions We consider two sets of input descriptions: {Seeds, MTurk}. The Seeds descriptions are 50 of the initial seed sentences from which workers were asked to create scenes. These seed sentences were simple (e.g., There is a desk 3The average number of objects in a scene in our humanbuilt dataset was 3.9. and a chair, There is a plate on a table) and did not have modifiers describing the objects. The MTurk descriptions are much more descriptive and exhibit a wider variety in language (including misspellings and ungrammatical constructs). Our hypothesis was that the rule-based system would perform well on the simple Seeds descriptions, but it would be insufficient for handling the complexities of the more varied MTurk descriptions. For these more natural descriptions, we expected our combination model to perform better. Our experimental results confirm this hypothesis. 7.1 Qualitative Evaluation Figure 5 shows a qualitative comparison of 3D scenes generated from example input descriptions using each of the four methods. In the top row, the rule-based approach selects a CPU chassis for computer, while combo and learned select a more iconic monitor. In the bottom row, the rule-based approach selects two newspapers and places them on the floor, while the combined approach correctly selects a coffee table with two newspapers on it. The learned model is limited to four objects and does not have a notion of object identity, so it often duplicates objects. 7.2 Human Evaluation We performed an experiment in which people rated the degree to which scenes match the textual descriptions from which they were generated. 58 Figure 6: Screenshot of the UI for rating scenedescription match. Such ratings are a natural way to evaluate how well our approach can generate scenes from text: in practical use, a person would provide an input description and then judge the suitability of the resulting scenes. For the MTurk descriptions, we randomly sampled 100 descriptions from the development split of our dataset. Procedure During the experiment, each participant was shown 30 pairs of scene descriptions and generated 3D scenes drawn randomly from all five conditions. All participants provided 30 responses each for a total of 5040 scene-description ratings. 
Participants were asked to rate how well the generated scene matched the input description on a 7point Likert scale, with 1 indicating a poor match and 7 a very good one (see Figure 6). In a separate task with the same experimental procedure, we asked other participants to rate the overall plausibility of each generated scene without a reference description. This plausibility rating measures whether a method can generate plausible scenes irrespective of the degree to which the input description is matched. We used Amazon Mechanical Turk to recruit 168 participants for rating the match of scenes to descriptions and 63 participants for rating scene plausibility. Design The experiment followed a withinsubjects factorial design. The dependent measure was the Likert rating. Since per-participant and per-scene variance on the rating is not accounted for by a standard ANOVA, we use a mixed effects model which can account for both fixed effects and random effects to determine the statistical signifimethod Seeds MTurk random 2.03 (1.88 – 2.18) 1.68 (1.57 – 1.79) learned 3.51 (3.23 – 3.77) 2.61 (2.40 – 2.84) rule 5.44 (5.26 – 5.61) 3.15 (2.91 – 3.40) combo 5.23 (4.96 – 5.44) 3.73 (3.48 – 3.95) human 6.06 (5.90 – 6.19) 5.87 (5.74 – 6.00) Table 2: Average scene-description match ratings across sentence types and methods (95% C.I.). cance of our results.4 We treat the participant and the specific scene as random effects of varying intercept, and the method condition as the fixed effect. Results There was a significant effect of the method condition on the scene-description match rating: χ2(4, N = 5040) = 1378.2, p < 0.001. Table 2 summarizes the average scene-description match ratings and 95% confidence intervals for all sentence type–condition pairs. All pairwise differences between ratings were significant under Wilcoxon rank-sum tests with the BonferroniHolm correction (p < 0.05). The scene plausibility ratings, which were obtained independent of descriptions, indicated that the only significant difference in plausibility was between scenes created by people (human) and all the other conditions. We see that for the simple seed sentences both the rule-based and combined model approach the quality of human-created scenes. However, all methods have significantly lower ratings for the more complex MTurk sentences. In this more challenging scenario, the combined model is closest to the manually created scenes and significantly outperforms both rule-based and learned models in isolation. 7.3 Error Analysis Figure 7 shows some common error cases in our system. The top left scene was generated with the rule-based method, the top right with the learned method, and the bottom two with the combined approach. At the top left, there is an erroneous selection of concrete object category (wood logs) for the four wood chairs reference in the input description, due to an incorrect head identification. At top right, the learned model identifies the 4We used the lme4 R package and optimized fit with maximum log-likelihood (Baayen et al., 2008). We report significance results using the likelihood-ratio (LR) test. 59 Figure 7: Common scene generation errors. From top left clockwise: Wood table and four wood chairs in the center of the room; There is a black and brown desk with a table lamp and flowers; There is a white desk, a black chair, and a lamp in the corner of the room; There in the middle is a table, on the table is a cup. 
presence of brown desk and lamp but erroneously picks two desks and two lamps (since we always pick the top four objects). The scene on the bottom right does not obey the expressed spatial constraints (in the corner of the room) since our system does not understand the grounding of room corner and that the top right side is not a good fit due to the door. In the bottom left, incorrect coreference resolution results in two tables for There in the middle is a table, on the table is a cup. 7.4 Scene Similarity Metric We introduce an automated metric for scoring scenes given a scene template representation, the aligned scene template similarity (ASTS). Given a one-to-one alignment A between the nodes of a scene template and the objects in a scene, let the alignment penalty J(A) be the sum of the number of unaligned nodes in the scene template and the number of unaligned objects in the scene. For the aligned nodes, we compute a similarity score S per node pair (n, n′) where S(n, n′) = 1 if the model ID matches, S(n, n′) = 0.5 if only the category matches and 0 otherwise. We define the ASTS of a scene with respect to a scene template to be the maximum alignment method Human ASTS random 1.68 0.08 learned 2.61 0.23 rule 3.15 0.32 combo 3.73 0.44 Table 3: Average human ratings (out of 7) and aligned scene template similarity scores. score over all such alignments: ASTS(s, z) = max A ∑ (n,n′)∈A S(n, n′) J(A) + |A| . With this definition, we compare average ASTS scores for each method against average human ratings (Table 3). We test the correlation of the ASTS metric against human ratings using Pearson’s r and Kendall’s rank correlation coefficient rτ. We find that ASTS and human ratings are strongly correlated (r = 0.70, rτ = 0.49, p < 0.001). This suggests ASTS scores could be used to train and algorithmically evaluate scene generation systems that map descriptions to scene templates. 8 Future Work Many error cases in our generated scenes resulted from not interpreting spatial relations. An obvious improvement would be to expand our learned lexical grounding approach to include spatial relations. This would help with spatial language that is not handled by the rule-based system’s dependency patterns (e.g., around, between, on the east side). One approach would be to add spatial constraints to the definition of our scene similarity score and use this improved metric in training a semantic parser to generate scene templates. To choose objects, our current system uses information obtained from language–object cooccurrences and sparse manually-annotated category labels; another promising avenue for achieving better lexical grounding is to propagate category labels using geometric and image features to learn the categories of unlabeled objects. Novel categories can also be extracted from Turker descriptions. These new labels could be used to improve the annotations in our 3D model database, enabling a wider range of object types to be used in scene generation. 60 Our approach learns object references without using lexical similarity features or a manuallyassembled lexicon. Thus, we expect that our method for lexical grounding can facilitate development of text-to-scene systems in other languages. However, additional data collection and experiments are necessary to confirm this and identify challenges specific to other languages. The necessity of handling omitted information suggests that a model incorporating a more sophisticated theory of pragmatic inference could be beneficial. 
Another important problem not addressed here is the role of context and discourse in interpreting scene descriptions. For example, several of our collected descriptions include language imagining embodied presence in the scene (e.g., The wooden table is to your right, if you’re entering the room from the doors). 9 Conclusion Prior work in 3D scene generation relies on purely rule-based methods to map object references to concrete 3D objects. We introduce a dataset of 3D scenes annotated with natural language descriptions which we believe will be of great interest to the research community. Using this corpus of scenes and descriptions, we present an approach that learns from data how to ground textual descriptions to objects. To evaluate how our grounding approach impacts generated scene quality, we collect human judgments of generated scenes. In addition, we present a metric for automatically comparing generated scene templates to scenes, and we show that it correlates strongly with human judgments. We demonstrate that rich lexical grounding can be learned directly from an unaligned corpus of 3D scenes and natural language descriptions, and that our model can successfully ground lexical terms to concrete referents, improving scene generation over baselines adapted from prior work. Acknowledgments We thank Katherine Breeden for valuable feedback. The authors gratefully acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Deep Exploration and Filtering of Text (DEFT) Program under Air Force Research Laboratory (AFRL) contract no. FA875013-2-0040, the National Science Foundation under grant no. IIS 1159679, the Department of the Navy, Office of Naval Research, under grant no. N00014-10-1-0109, and the Stanford Graduate Fellowship fund. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation, the Office of Naval Research, DARPA, AFRL, or the US government. References R.H. Baayen, D.J. Davidson, and D.M. Bates. 2008. Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4):390–412. Angel X. Chang, Manolis Savva, and Christopher D. Manning. 2014. Learning spatial knowledge for text to 3D scene generation. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP). Bob Coyne and Richard Sproat. 2001. WordsEye: an automatic text-to-scene conversion system. In Proceedings of the 28th Annual Conference on Computer Graphics and Interactive Techniques. Bob Coyne, Alexander Klapheke, Masoud Rouhizadeh, Richard Sproat, and Daniel Bauer. 2012. Annotation tools and knowledge representation for a text-to-scene system. Proceedings of COLING 2012: Technical Papers. Ali Farhadi, Mohsen Hejrati, Mohammad Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, and David Forsyth. 2010. Every picture tells a story: Generating sentences from images. In Computer Vision–ECCV 2010. Matthew Fisher, Daniel Ritchie, Manolis Savva, Thomas Funkhouser, and Pat Hanrahan. 2012. Example-based synthesis of 3D object arrangements. ACM Transactions on Graphics (TOG), 31(6):135. Nicholas FitzGerald, Yoav Artzi, and Luke Zettlemoyer. 2013. Learning distributions over logical forms for referring expression generation. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP). Dave Golland, Percy Liang, and Dan Klein. 2010. 
A game-theoretic approach to generating spatial descriptions. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP). Peter Gorniak and Deb Roy. 2004. Grounded semantic composition for visual scenes. Journal of Artificial Intelligence Research (JAIR), 21(1):429–470. Peter Gorniak and Deb Roy. 2005. Probabilistic grounding of situated speech using plan recognition and reference resolution. In Proceedings of the 7th International Conference on Multimodal Interfaces. 61 Andrej Karpathy, Armand Joulin, and Li Fei-Fei. 2014. Deep fragment embeddings for bidirectional image sentence mapping. In Advances in Neural Information Processing Systems. Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, and Tamara L. Berg. 2014. ReferItGame: Referring to objects in photographs of natural scenes. In Proceedings of Empirical Methods in Natural Language Processing (EMNLP). Jayant Krishnamurthy and Thomas Kollar. 2013. Jointly learning to parse and perceive: Connecting natural language to the physical world. Transactions of the Association for Computational Linguistics, 1:193–206. Girish Kulkarni, Visruth Premraj, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. 2011. Baby talk: Understanding and generating simple image descriptions. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Finkel, Steven J. Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics: System Demonstrations. Cynthia Matuszek, Nicholas FitzGerald, Luke Zettlemoyer, Liefeng Bo, and Dieter Fox. 2012. A joint model of language and perception for grounded attribute learning. In International Conference on Machine Learning (ICML). George A. Miller. 1995. WordNet: A lexical database for English. Communications of the ACM, 38(11):39–41. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. BLEU: A method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Masoud Rouhizadeh, Margit Bowler, Richard Sproat, and Bob Coyne. 2011. Collecting semantic data by Mechanical Turk for the lexical knowledge resource of a text-to-picture generating system. In Proceedings of the Ninth International Conference on Computational Semantics. Manolis Savva, Angel X. Chang, Gilbert Bernstein, Christopher D. Manning, and Pat Hanrahan. 2014. On being the right scale: Sizing large collections of 3D models. In SIGGRAPH Asia 2014 Workshop on Indoor Scene Understanding: Where Graphics meets Vision. Lee M. Seversky and Lijun Yin. 2006. Real-time automatic 3D scene generation from natural language voice and text descriptions. In Proceedings of the 14th Annual ACM International Conference on Multimedia. Behjat Siddiquie, Rog´erio Schmidt Feris, and Larry S. Davis. 2011. Image ranking and retrieval based on multi-attribute queries. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Terry Winograd. 1972. Understanding natural language. Cognitive Psychology, 3(1):1–191. C. Lawrence Zitnick, Devi Parikh, and Lucy Vanderwende. 2013. Learning the visual interpretation of sentences. In IEEE International Conference on Computer Vision (ICCV). 62
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 616–625, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Sentiment-Aspect Extraction based on Restricted Boltzmann Machines Linlin Wang1, Kang Liu2∗, Zhu Cao1, Jun Zhao2 and Gerard de Melo1 1Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China 2National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China {ll-wang13, cao-z13}@mails.tsinghua.edu.cn, {kliu, jzhao}@nlpr.ia.ac.cn, [email protected] Abstract Aspect extraction and sentiment analysis of reviews are both important tasks in opinion mining. We propose a novel sentiment and aspect extraction model based on Restricted Boltzmann Machines to jointly address these two tasks in an unsupervised setting. This model reflects the generation process of reviews by introducing a heterogeneous structure into the hidden layer and incorporating informative priors. Experiments show that our model outperforms previous state-of-the-art methods. 1 Introduction Nowadays, it is commonplace for people to express their opinion about various sorts of entities, e.g., products or services, on the Internet, especially in the course of e-commerce activities. Analyzing online reviews not only helps customers obtain useful product information, but also provide companies with feedback to enhance their products or service quality. Aspect-based opinion mining enables people to consider much more finegrained analyses of vast quantities of online reviews, perhaps from numerous different merchant sites. Thus, automatic identification of aspects of entities and relevant sentiment polarities in Big Data is a significant and urgent task (Liu, 2012; Pang and Lee, 2008; Popescu and Etzioni, 2005). Identifying aspect and analyzing sentiment words from reviews has the ultimate goal of discerning people’s opinions, attitudes, emotions, etc. towards entities such as products, services, organizations, individuals, events, etc. In this context, aspect-based opinion mining, also known as feature-based opinion mining, aims at extracting and summarizing particular salient aspects of entities and determining relevant sentiment polarities ∗Corresponding Author: Kang Liu ([email protected]) from reviews (Hu and Liu, 2004). Consider reviews of computers, for example. A given computer’s components (e.g., hard disk, screen) and attributes (e.g., volume, size) are viewed as aspects to be extracted from the reviews, while sentiment polarity classification consists in judging whether an opinionated review expresses an overall positive or negative opinion. Regarding aspect identification, previous methods can be divided into three main categories: rule-based, supervised, and topic model-based methods. For instance, association rule-based methods (Hu and Liu, 2004; Liu et al., 1998) tend to focus on extracting product feature words and opinion words but neglect connecting product features at the aspect level. Existing rule-based methods typically are not able to group the extracted aspect terms into categories. Supervised (Jin et al., 2009; Choi and Cardie, 2010) and semisupervised learning methods (Zagibalov and Carroll, 2008; Mukherjee and Liu, 2012) were introduced to resolve certain aspect identification problems. 
However, supervised training requires handlabeled training data and has trouble coping with domain adaptation scenarios. Hence, unsupervised methods are often adopted to avoid this sort of dependency on labeled data. Latent Dirichlet Allocation, or LDA for short, (Blei et al., 2003) performs well in automatically extracting aspects and grouping corresponding representative words into categories. Thus, a number of LDA-based aspect identification approaches have been proposed in recent years (Brody and Elhadad, 2010; Titov and McDonald, 2008; Zhao et al., 2010). Still, these methods have several important drawbacks. First, inaccurate approximations of the distribution over topics may reduce the computational accuracy. Second, mixture models are unable to exploit the co-occurrence of topics to yield high probability predictions for words that are sharper than the distributions predicted by in616 dividual topics (Hinton and Salakhutdinov, 2009). To overcome the weaknesses of existing methods and pursue the promising direction of jointly learning aspect and sentiment, we present the novel Sentiment-Aspect Extraction RBM (SERBM) model to simultaneously extract aspects of entities and relevant sentiment-bearing words. This two-layer structure model is inspired by conventional Restricted Boltzmann machines (RBMs). In previous work, RBMs with shared parameters (RSMs) have achieved great success in capturing distributed semantic representations from text (Hinton and Salakhutdinov, 2009). Aiming to make the most of their ability to model latent topics while also accounting for the structured nature of aspect opinion mining, we propose replacing the standard hidden layers of RBMs with a novel heterogeneous structure. Three different types of hidden units are used to represent aspects, sentiments, and background words, respectively. This modification better reflects the generative process for reviews, in which review words are generated not only from the aspect distribution but also affected by sentiment information. Furthermore, we blend background knowledge into this model using priors and regularization to help it acquire more accurate feature representations. After m-step Contrastive Divergence for parameter estimation, we can capture the required data distribution and easily compute the posterior distribution over latent aspects and sentiments from reviews. In this way, aspects and sentiments are jointly extracted from reviews, with limited computational effort. This model is hence a promising alternative to more complex LDAbased models presented previously. Overall, our main contributions are as follows: 1. Compared with previous LDA-based methods, our model avoids inaccurate approximations and captures latent aspects and sentiment both adequately and efficiently. 2. Our model exploits RBMs’ advantage in properly modeling distributed semantic representations from text, but also introduces heterogeneous structure into the hidden layer to reflect the generative process for online reviews. It also uses a form of regularization to incorporate prior knowledge into the model. Due these modifications, our model is very well-suited for solving aspect-based opinion mining tasks. 3. The optimal weight matrix of this RBM model can exactly reflect individual word features toward aspects and sentiment, which is hard to achieve with LDA-based models due to the mixture model sharing mechanism. 4. Last but not the least, this RBM model is capable of jointly modeling aspect and sentiment information together. 
2 Related Work We summarize prior state-of-the-art models for aspect extraction. In their seminal work, Hu and Liu (2004) propose the idea of applying classical information extraction to distinguish different aspects in online reviews. Methods following their approach exploit frequent noun words and dependency relations to extract product features without supervision (Zhuang et al., 2006; Liu et al., 2005; Somasundaran and Wiebe, 2009). These methods work well when the aspect is strongly associated with a single noun, but obtain less satisfactory results when the aspect emerges from a combination of low frequency items. Additionally, rule-based methods have a common shortcoming in failing to group extracted aspect terms into categories. Supervised learning methods (Jin et al., 2009; Choi and Cardie, 2010; Jakob and Gurevych, 2010; Kobayashi et al., 2007) such as Hidden Markov Models, one-class SVMs, and Conditional Random Fields have been widely used in aspect information extraction. These supervised approaches for aspect identification are generally based on standard sequence labeling techniques. The downside of supervised learning is its requirement of large amounts of hand-labeled training data to provide enough information for aspect and opinion identification. Subsequent studies have proposed unsupervised learning methods, especially LDA-based topic modeling, to classify aspects of comments. Specific variants include the Multi-Grain LDA model (Titov and McDonald, 2008) to capture local rateable aspects, the two-step approach to detect aspect-specific opinion words (Brody and Elhadad, 2010), the joint sentiment/topic model (JST) by Lin and He (2009), the topic-sentiment mixture model with domain adaption (Mei et al., 2007), which treats sentiment as different topics, and MaxEnt-LDA (Zhao et al., 2010), which integrates a maximum entropy approach into LDA. 617 h1 v1 hF vD vi v1 vi vD W1,1 W1,F Wi,F WD,F WD,1 Wi,1 ! ! ! ! hj v hi Latent Topics W1 W2 Figure 1: RBM Schema However, these LDA-based methods can only adopt inaccurate approximations for the posterior distribution over topics rather than exact inference. Additionally, as a mixture model, LDA suffers from the drawbacks mentioned in Section 1 that are common to all mixture models. 3 Model In order to improve over previous work, we first introduce a basic RBM-based model and then describe our modified full model. 3.1 Basic RBM-based Model Restricted Boltzmann Machines can be used for topic modeling by relying on the structure shown in Figure 1. As shown on the left side of the figure, this model is a two-layer neural network composed of one visible layer and one hidden layer. The visible layer consists of a softmax over discrete visible units for words in the text, while the hidden layer captures its topics. More precisely, the visible layer is represented as a K × D matrix v, where K is the dictionary size, and D is the document length. Here, if visible unit i in v takes the k-th value, we set vk i = 1. The hidden layer can be expressed as h ∈{0, 1}F , where F is the number of hidden layer nodes, corresponding to topics. The right side of Figure 1 is another way of viewing the network, with a single multinomial visible unit (Hinton and Salakhutdinov, 2009). 
The energy function of the model can be defined as
$$E(v, h) = -\sum_{i=1}^{D}\sum_{j=1}^{F}\sum_{k=1}^{K} W_{ij}^{k} h_j v_i^k \;-\; \sum_{i=1}^{D}\sum_{k=1}^{K} v_i^k b_i^k \;-\; \sum_{j=1}^{F} h_j a_j, \quad (1)$$
where $W_{ij}^{k}$ specifies the connection weight from the i-th visible node of value k to the j-th hidden node, $b_i^k$ corresponds to a bias of $v_i^k$, and $a_j$ corresponds to a bias of $h_j$. The probability of the input layer v is defined as
$$P(v) = \frac{1}{Z} \sum_{h} \exp\big(-E(v, h)\big), \quad (2)$$
where Z is the partition function to normalize the probability. The conditional probabilities from the hidden to the visible layer and from the visible to the hidden one are given in terms of a softmax and logistic function, respectively, i.e.
$$P(v_i^k = 1 \mid h) = \frac{\exp\!\Big(b_i^k + \sum_{j=1}^{F} h_j W_{ij}^{k}\Big)}{\sum_{q=1}^{K} \exp\!\Big(b_i^q + \sum_{j=1}^{F} h_j W_{ij}^{q}\Big)}, \qquad P(h_j = 1 \mid v) = \sigma\!\Big(a_j + \sum_{i=1}^{D}\sum_{k=1}^{K} v_i^k W_{ij}^{k}\Big), \quad (3)$$
where $\sigma(x) = 1/(1 + \exp(-x))$ is the logistic function.
3.2 Our Sentiment-Aspect Extraction model
While the basic RBM-based method provides a simple model of latent topics, real online reviews require a more fine-grained model, as they consist of opinion aspects and sentiment information. Therefore, aspect identification is a different task from regular topic modeling and the basic RBM-based model may not perform well in aspect extraction for reviews. To make the most of the ability of the basic RBM-based model in extracting latent topics, and obtain an effective method that is well-suited to solve aspect identification tasks, we present our novel Sentiment-Aspect Extraction RBM model.
3.2.1 Generative Perspective
From a generative perspective, product reviews can be regarded as follows. Every word in a review text may describe a specific aspect (e.g. "expensive" for the price aspect), or an opinion (e.g. "amazing" for a positive sentiment and "terrible" for a negative one), or some irrelevant background information (e.g. "Sunday"). In a generative model, a word may be generated from a latent aspect variable, a sentiment variable, or a background variable. Also, there may exist certain relations between such latent variables.
[Figure 2: Sentiment-Aspect Extraction Model: word-count visible units connected to a heterogeneous hidden layer of aspect, sentiment, and background units, jointly learned with the prior knowledge φ extracted from the POS-tagged sentence.]
3.2.2 Structure
To simulate this generative process for reviews, we adapt the standard RBM structure to reflect the aspect-sentiment identification task.
Undirected Model. Our Sentiment-Aspect Extraction model structure is illustrated in Figure 2. Compared to standard RBMs, a crucial difference is that hidden units now have a heterogeneous structure instead of being homogeneous as in the standard basic RBM model. In particular, we rely on three types of hidden units, representing aspect, sentiment, and background, respectively. The first two types are self-explanatory, while the background units are intended to reflect the kind of words that do not contribute much to the aspect or sentiment information of review documents. Since the output of the hidden units is a re-encoding of the information in the visible layer, we obtain a deeper representation and a more precise expression of information in the input reviews. Thus, this approach enables the model to learn multi-faceted information with a simple yet expressive structure. To formalize this, we denote $\hat{v}_k = \sum_{i=1}^{D} v_i^k$ as the count for the k-th word, where D is the document length.
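As a bridge to the count-based formulation just introduced, the toy sketch below (with made-up parameters) checks numerically that, once the weights are shared across positions (i.e. $W_{ij}^k = W_j^k$ for every position i), the hidden-unit activation of Eq. (3) can be computed from the word counts $\hat{v}_k$ alone, which is the form used in the energy function that follows.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(1)
D, K, F = 4, 6, 3                       # toy document length, vocab size, hidden units

# Position-shared weights: W[i, k, j] = W_shared[k, j] for every position i,
# which is the tying implied once documents are represented by word counts.
W_shared = 0.1 * rng.standard_normal((K, F))
W = np.broadcast_to(W_shared, (D, K, F))
a = np.zeros(F)

# A toy document as K x D one-hot columns, and its word-count vector v_hat.
v = np.zeros((K, D))
v[rng.integers(K, size=D), np.arange(D)] = 1
v_hat = v.sum(axis=1)

# Eq. (3): activation summed over every position and vocabulary entry.
act_positionwise = a + np.einsum("ki,ikj->j", v, W)
# Count-based form: the same activation computed from v_hat alone.
act_counts = a + v_hat @ W_shared

assert np.allclose(act_positionwise, act_counts)
print(sigmoid(act_counts))              # P(h_j = 1 | v) for each hidden unit
```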
The energy function can then be defined as follows:
$$E(v, h) = -\sum_{j=1}^{F}\sum_{k=1}^{K} W_j^k h_j \hat{v}_k \;-\; \sum_{k=1}^{K} \hat{v}_k b_k \;-\; \sum_{j=1}^{F} h_j a_j, \quad (4)$$
where $W_j^k$ denotes the weight between the k-th visible unit and the j-th hidden unit. The conditional probability from visible to hidden unit can be expressed as:
$$P(h_j = 1 \mid v) = \sigma\!\Big(a_j + \sum_{k=1}^{K} \hat{v}_k W_j^k\Big). \quad (5)$$
In an RBM, every hidden unit can be activated or restrained by visible units. Thus, every visible unit has a potential contribution towards the activation of a given hidden unit. The probability of whether a given visible unit affects a specific hidden unit is described as follows (cf. appendix for details):
$$P(h_j = 1 \mid \hat{v}_k) = P(h_j = 1 \mid h_{-j}, \hat{v}_k) = \sigma\big(a_j + W_j^k \hat{v}_k\big). \quad (6)$$
Under this architecture, this equation can be explained as the conditional probability from visible unit k to hidden unit j (softmax of words to aspect or sentiment). According to Eq. 6, the conditional probability for the k-th word feature towards the j-th aspect or sentiment, $P(h_j = 1 \mid \hat{v}_k)$, is a monotone function of $W_j^k$, the (k, j)-th entry of the optimal weight matrix. Thus, the optimal weight matrix of this RBM model can directly reflect individual word features toward aspects and sentiment.
Informative Priors. To improve the ability of the model to extract aspects and identify sentiments, we capture priors for words in reviews and incorporate this information into the learning process of our Sentiment-Aspect Extraction model. We regularize our model based on these priors to constrain the aspect modeling and improve its accuracy. Figure 3 provides an example of how such priors can be applied to a sentence, with φi representing the prior knowledge.
[Figure 3: Prior Feature Extraction: the POS-tagged sentence "The_DT delicious_JJ dishes_NN in_IN the_DT restaurant_NN taste_VBZ great_JJ" with priors φ1, φ2 linking nouns to aspect priors and φ3, φ4 linking adjectives to sentiment priors.]
Research has found that most aspect words are nouns (or noun phrases), and sentiment is often expressed with adjectives. This additional information has been utilized in previous work on aspect extraction (Hu and Liu, 2004; Benamara et al., 2007; Pang et al., 2002). Inspired by this, we first rely on Part of Speech (POS) Tagging to identify nouns and adjectives. For all noun words, we first calculate their term frequency (TF) in the review corpus, and then compute their inverse document frequency (IDF) from an external Google n-gram corpus (http://books.google.com/ngrams/datasets). Finally, we rank their TF∗IDF values and assign them an aspect prior probability $p_{A,v_k}$, indicating their general probability of being an aspect word. This TF-IDF approach is motivated by the following intuitions: the most frequently mentioned candidates in reviews have the highest probability of being an opinion target, and false target words are non-domain specific and frequently appear in a general text corpus (Liu et al., 2012; Liu et al., 2013). For all adjective words, if the words are also included in the online sentiment resource SentiWordNet (http://sentiwordnet.isti.cnr.it), we assign a prior probability $p_{S,v_k}$ to suggest that these words are generally recognized as sentiment words. Apart from these general priors, we obtain a small amount of fine-grained information as another type of prior knowledge. This fine-grained prior knowledge serves to indicate the probability of a known aspect word belonging to a specific aspect, denoted as $p_{A_j,v_k}$, and of an identified sentiment word bearing positive or negative sentiment, denoted as $p_{S_j,v_k}$.
For instance, "salad" is always considered as a general word that belongs to the specific aspect food, and "great" is generally considered a positive sentiment word. To extract $p_{A_j,v_k}$, we apply regular LDA on the review dataset. Since the resulting topic clusters are unlabeled, we manually assign the top k words from the topics to the target aspects. We thus obtain fine-grained prior probabilities suggesting that these words belong to specific aspects. To obtain $p_{S_j,v_k}$, we rely on SentiWordNet and sum up the probabilities of an identified sentiment word being positive or negative sentiment-bearing, respectively. Then we adopt the corresponding percentage value as a fine-grained specific sentiment prior. It is worthwhile to mention that the priors are not a compulsory component. However, the procedure for obtaining priors is generic and can easily be applied to any given dataset. Furthermore, we only obtain such fine-grained prior knowledge for a small number of words in review sentences and rely on the capability of the model itself to deal with the remaining words.
3.2.3 Objective Function
We now construct an objective function for our SERBM model that includes regularization based on the priors defined above in Section 3.2.2. Suppose that the training set is $S = \{v^1, v^2, \ldots, v^{n_s}\}$, where $n_s$ is the number of training objects. Each element has the form $v^i = (v^i_1, v^i_2, \ldots, v^i_K)^D$, where $i = 1, 2, \ldots, n_s$, and these data points are assumed to be independent and identically distributed. We define the following novel log-likelihood function $\ln L_S$, with four forms of regularization corresponding to the four kinds of priors:
$$\begin{aligned}
\ln L_S = \ln \prod_{i=1}^{n_s} P(v^i) - \sum_{i=1}^{n_s} \Bigg[
& \lambda_1 \ln \prod_{j=1}^{F_1-1} \prod_{k \in R_1} \big(P(h_j = 1 \mid \hat{v}_k) - p_{A_j,v_k}\big)^2 \\
+ \; & \lambda_2 \ln \prod_{k \in R_2} \Big(\sum_{j=1}^{F_1} P(h_j = 1 \mid \hat{v}_k) - p_{A,v_k}\Big)^2 \\
+ \; & \lambda_3 \ln \prod_{j=F_2}^{F_2+1} \prod_{k \in R_3} \big(P(h_j = 1 \mid \hat{v}_k) - p_{S_j,v_k}\big)^2 \\
+ \; & \lambda_4 \ln \prod_{k \in R_4} \Big(\sum_{j=F_2}^{F_2+1} P(h_j = 1 \mid \hat{v}_k) - p_{S,v_k}\Big)^2 \Bigg] \quad (7)
\end{aligned}$$
Here, $P(h_j = 1 \mid \hat{v}_k)$ stands for the probability of a given input word belonging to a specific hidden unit. We assume all $\lambda_i > 0$ for $i = 1 \ldots 4$, while $F_1$ and $F_2$ are integers marking offsets within the hidden layer. Units up to index $F_1$ capture aspects, with the last one reserved for miscellaneous Other Aspects, while units from $F_2$ capture the sentiment (with $F_2 = F_1 + 1 < F$ for convenience). Our goal will be to maximize the log-likelihood $\ln L_S$ in order to adequately model the data, in accordance with the regularization.
3.2.4 Training
We use Stochastic Gradient Descent (SGD) to find suitable parameters that maximize the objective function. Given a single training instance v from the training set S, we obtain
$$\begin{aligned}
\frac{\partial \ln L}{\partial \theta} = \frac{\partial \ln P(v)}{\partial \theta}
& - \lambda_1 \sum_{j=1}^{F_1-1} \sum_{k \in R_1} \frac{\partial \ln \big(P(h_j = 1 \mid \hat{v}_k) - p_{A_j,v_k}\big)^2}{\partial \theta} \\
& - \lambda_2 \sum_{k \in R_2} \frac{\partial \ln \Big[\sum_{j=1}^{F_1} P(h_j = 1 \mid \hat{v}_k) - p_{A,v_k}\Big]^2}{\partial \theta} \\
& - \lambda_3 \sum_{j=F_2}^{F_2+1} \sum_{k \in R_3} \frac{\partial \ln \big(P(h_j = 1 \mid \hat{v}_k) - p_{S_j,v_k}\big)^2}{\partial \theta} \\
& - \lambda_4 \sum_{k \in R_4} \frac{\partial \ln \Big[\sum_{j=F_2}^{F_2+1} P(h_j = 1 \mid \hat{v}_k) - p_{S,v_k}\Big]^2}{\partial \theta} \quad (8)
\end{aligned}$$
where $\theta = \{W, a_j, b_i\}$ stands for the parameters. Given N documents $\{v^n\}_{n=1}^{N}$, the first term in the log-likelihood function with respect to W is:
$$\frac{1}{N} \sum_{n=1}^{N} \frac{\partial \ln P(v^n)}{\partial W_j^k} = E_{D_1}[\hat{v}_k h_j] - E_{D_2}[\hat{v}_k h_j]. \quad (9)$$
Here, $E_{D_1}[\cdot]$ and $E_{D_2}[\cdot]$ represent the expectation with respect to the data distribution and the distribution obtained by this model, respectively. We use Contrastive Divergence (CD) to approximate $E_{D_2}[\hat{v}_k h_j]$ (Hinton and Salakhutdinov, 2009).
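The following is a minimal sketch of one CD-1 update for the count-based parameterization, assuming a single Gibbs step and a multinomial reconstruction of the D words. The prior-based regularization terms of Eqs. (7) and (8) are omitted for brevity, and all parameter values and the learning rate are arbitrary illustrations rather than the settings used in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_gradient(v_hat, W, a, b, rng):
    """One CD-1 estimate of E_D1[v_hat_k h_j] - E_D2[v_hat_k h_j] (cf. Eq. 9),
    using the count-based parameterization of Section 3.2 (W: K x F)."""
    D = int(v_hat.sum())                         # document length

    # Positive phase: hidden probabilities given the data (Eq. 5).
    p_h0 = sigmoid(a + v_hat @ W)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)

    # Negative phase: reconstruct D words from the softmax over the vocabulary,
    # then recompute hidden probabilities (one Gibbs step, i.e. CD-1).
    logits = b + W @ h0
    p_v = np.exp(logits - logits.max()); p_v /= p_v.sum()
    v_hat1 = rng.multinomial(D, p_v).astype(float)
    p_h1 = sigmoid(a + v_hat1 @ W)

    # Gradient estimate for the unregularized log-likelihood term.
    grad_W = np.outer(v_hat, p_h0) - np.outer(v_hat1, p_h1)
    return grad_W

# Illustrative usage with random parameters; the regularization gradients of
# Eq. (8) would be added to grad_W before the SGD step in the full model.
rng = np.random.default_rng(2)
K, F = 8, 10
W, a, b = 0.01 * rng.standard_normal((K, F)), np.zeros(F), np.zeros(K)
v_hat = rng.multinomial(12, np.ones(K) / K).astype(float)
W += 0.05 * cd1_gradient(v_hat, W, a, b, rng)    # learning rate is arbitrary here
```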
Due to the m steps of transfer between input and hidden layers in a CD-m run of the algorithm, the two types of hidden units, aspect and sentiment, will jointly affect input reviews together with the connection matrix between the two layers. Finally, we consider the partial derivative of the entire log-likelihood function with respect to the parameter W. Denoting ln ∂L ∂W as ∇W, in each step we update ∇W k j by adding λ h P(hj = 1|v(0))v(0) k −P(hj = 1|v(cdm))v(cdm) k i −λ1 F1−1 X j=1 X k∈R1 2Gjbvk (1 + Gj)2( 1 1+Gj −pAj,vk) −λ2 X k∈R2 2bvk PF1 j=1 1 (1+Gj) −pA,vk F1 X j=1 Gj (1 + Gj)2 −λ3 F2+1 X j=F2 X k∈R3 2Gjbvk (1 + Gj)2( 1 1+Gj −pSj,vk) −λ4 X k∈R4 2bvk PF2+1 j=F2 1 (1+Gj) −pS,vk F2+1 X j=F2 Gj (1 + Gj)2 , where Gj=e−(aj+W k j bvk) for convenience, and v(cdm) is the result from the CD-m steps. 4 Experiments We present a series of experiments to evaluate our model’s performance on the aspect identification and sentiment classification tasks. 4.1 Data For this evaluation, we rely on a restaurant review dataset widely adopted by previous work (Ganu et al., 2009; Brody and Elhadad, 2010; Zhao et al., 2010), which contains 1,644,923 tokens and 52,574 documents in total. Documents in this dataset are annotated with one or more labels from a gold standard label set S = {Food, Staff, Ambience, Price, Anecdote, Miscellaneous}. Following the previous studies, we select reviews with less than 50 sentences and remove stop words. The Stanford POS Tagger3 is used to distinguish noun and adjective words from each other. We later also rely on the Polarity dataset v2.04 to conduct an additional experiment on sentiment classification in order to better assess the model’s overall performance. This dataset focuses on movie reviews and consists of 1000 positive review documents and 1000 negative ones. It has also been used in the experiments by Lin & He (2009), among others. 4.2 Aspect Identification We first apply our novel model to identify aspects from documents in the restaurant review dataset. 4.2.1 Experimental Setup For the experimental setup, we use ten hidden units in our Sentiment-Aspect Extraction RBM (SERBM), where units 0–6 capture aspects, units 7–8 capture sentiment information, and unit 9 stores background information. In particular, we fix hidden units 0–6 to represent the target aspects Food, Staff, Ambience, Price, Ambience, Miscellaneous, and Other Aspects, respectively. Units 7–8 represent positive and negative sentiment, respectively. The remaining hidden unit is intended to capture irrelevant background information. Note that the structure of our model needs no modifications for new reviews. There are two cases for datasets from a new domain. If the new 3http://nlp.stanford.edu/software/tagger.shtml 4http://www.cs.cornell.edu/people/pabo/ movie-review-data/ 621 Method RBM RSM SERBM PPL 49.73 39.19 21.18 Table 1: Results in terms of perplexity dataset has a gold standard label set, then we assign one hidden unit to represent each label in the gold standard set. If not, our model only obtains the priors pA,vk and pS,vk, and the aspect set can be inferred as in the work of Zhao et al. (2010). For evaluation, following previous work, the annotated data is fed into our unsupervised model, without any of the corresponding labels. The model is then evaluated in terms of how well its prediction matches the true labels. As for hyperparameter optimization, we use the perplexity scores as defined in Eq. 10 to find the optimal hyperparameters. 
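As a concrete reading of the hidden-unit layout fixed in Section 4.2.1, the small configuration sketch below spells out the index-to-role mapping. Note one assumption: the setup description lists "Ambience" twice; following the gold-standard label set, we read the second occurrence as "Anecdote".

```python
# Hidden-unit layout of Section 4.2.1 (ten units in total). The index
# assignments mirror the fixed mapping described in the text; "Anecdote"
# at index 4 is an assumption resolving the duplicated "Ambience".
HIDDEN_UNITS = 10

ASPECT_UNITS = {
    0: "Food", 1: "Staff", 2: "Ambience", 3: "Price",
    4: "Anecdote", 5: "Miscellaneous", 6: "Other Aspects",
}
SENTIMENT_UNITS = {7: "Positive", 8: "Negative"}
BACKGROUND_UNITS = {9: "Background"}

def unit_role(j):
    """Map a hidden-unit index to its role (aspect / sentiment / background)."""
    if j in ASPECT_UNITS:
        return "aspect", ASPECT_UNITS[j]
    if j in SENTIMENT_UNITS:
        return "sentiment", SENTIMENT_UNITS[j]
    return "background", BACKGROUND_UNITS[j]

assert len(ASPECT_UNITS) + len(SENTIMENT_UNITS) + len(BACKGROUND_UNITS) == HIDDEN_UNITS
print(unit_role(4), unit_role(8), unit_role(9))
```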
As a baseline, we also re-implement standard RBMs and the RSM model (Hinton and Salakhutdinov, 2009) to process this same restaurant review dataset and identify aspects for every document in this dataset under the same experimental conditions. We recall that RSM is a similar undirected graphical model that models topics from raw text. Last but not the least, we conduct additional comparative experiments, including with LocLDA (Brody and Elhadad, 2010), MaxEnt-LDA (Zhao et al., 2010) and the SAS model (Mukherjee and Liu, 2012) to extract aspects for this restaurant review dataset under the same experimental conditions. In the following, we use the abbreviated name MELDA to stand for the MaxEnt LDA method. 4.2.2 Evaluation Brody and Elhadad (2010) and Zhao et al. (2010) utilize three aspects to perform a quantitative evaluation and only use sentences with a single label for evaluation to avoid ambiguity. The three major aspects chosen from the gold standard labels are S = {Food, Staff, Ambience}. The evaluation criterion essentially is to judge how well the prediction matches the true label, resulting in Precision, Recall, and F1 scores. Besides these, we consider perplexity (PPL) as another evaluation metric to analyze the aspect identification quality. The average test perplexity PPL over words is defined as: exp −1 N N X n=1 1 Dn log P(vn) ! , (10) Aspect Method Precision Recall F1 RBM 0.753 0.680 0.715 RSM 0.718 0.736 0.727 food LocLDA 0.898 0.648 0.753 MELDA 0.874 0.787 0.828 SAS 0.867 0.772 0.817 SERBM 0.891 0.854 0.872 RBM 0.436 0.567 0.493 RSM 0.430 0.310 0.360 staff LocLDA 0.804 0.585 0.677 MELDA 0.779 0.540 0.638 SAS 0.774 0.556 0.647 SERBM 0.819 0.582 0.680 RBM 0.489 0.439 0.463 RSM 0.498 0.441 0.468 ambi LocLDA 0.603 0.677 0.638 -ence MELDA 0.773 0.588 0.668 SAS 0.780 0.542 0.640 SERBM 0.805 0.592 0.682 Table 2: Aspect identification results in terms of precision, recall, and F1 scores on the restaurant reviews dataset where N is the number of documents, Dn represents the word number, and vn stands for the wordcount of document n. Average perplexity results are reported in Table 1, while Precision, Recall, and F1 evaluation results for aspect identification are given in Table 2. Some LDA-based methods require manual mappings for evaluation, which causes difficulties in obtaining a fair PPL result, so a few methods are only considered in Table 2. To illustrate the differences, in Table 3, we list representative words for aspects identified by various models and highlight words without an obvious association or words that are rather unspecific in bold. 4.2.3 Discussion Considering the results from Table 1 and the RBM, RSM, and SERBM-related results from Table 2, we find that the RSM performs better than the regular RBM model on this aspect identification task. However, the average test perplexity is greatly reduced even further by the SERBM method, resulting in a relative improvement by 45.96% over the RSM model. 
Thus, despite the elaborate modification, our SERBM inherits RBMs’ ability in modeling latent topics, but significantly outperforms other RBM family models 622 Aspect RSM RBM Loc-LDA ME-LDA SAS SERBM great menu,drink chicken chocolate food,menu salad,cheese dessert food,pizza menu,salad dessert dessert dessert beef chicken good cream drinks chicken Food drink,BBQ seafood fish ice,cake chicken sauce menu good drinks desserts cheeses rice,pizza delicious sandwich wine,sauce good beers,salad food good soup rice bread delicious dish fish flavor cheese cheese rice sushi,menu service staff service service staff,slow service room helpful staff,waiter staff,food waitress staff,friendly slow waiter attentive wait,waiters attentive waitress Staff table friendly busy waiter helpful waitstaff quick good,attentive slow,friendly place service attentive waitress slow,service table restaurant minutes waitresses friendly restaurant wait waitress wait,friendly servers waiter minutes minutes waitstaff waiter minutes atmosphere place great room place atmosphere music atmosphere atmosphere dining decor atmosphere place cozy wonderful tables great scene dinner door music bar good place Ambience romantic cute seating place romantic tables room bar experience decor tables outside comfortable great relaxed scene bar area tables seating bar space decor ambiance good experience room area great outdoor ambiance romantic outside table music romantic,cozy Table 3: Aspects and representative words on the aspect identification task. In Table 2, we also observe that SERBM achieves a higher accuracy compared with other state-of-the-art aspect identification methods. More specifically, it is evident that our SERBM model outperforms previous methods’ F1 scores. Compared with MELDA, the F1 scores for the SERBM lead to relative improvements of 5.31%, 6.58%, and 2.10%, respectively, for the Food, Staff, and Ambience aspects. Compared with SAS, the F1 scores yield relative improvements by 6.73%, 5.10%, and 6.56%, respectively, on those same aspects. As for Precision and Recall, the SERBM also achieves a competitive performance compared with other methods in aspect identification. Finally, we conclude from Table 3 that the SERBM method has the capability of extracting word with obvious aspect-specific features and makes less errors compared with other models. 4.3 Sentiment Classification We additionally conduct two experiments to evaluate the model’s performance on sentiment classification. 4.3.1 Comparison with SentiWordNet We assign a sentiment score to every document in the restaurant review dataset based on the output of SERBM’s sentiment-type hidden units. To analyze SERBM’s performance in sentiment classification, we compare these results with SentiWordNet5, a well-known sentiment lexicon. For this SentiWordNet baseline, we consult the resource to obtain a sentiment label for every word and aggregate these to judge the sentiment information of an entire review document in terms of the sum of word-specific scores. Table 4 provides a comparison between SERBM and SentiWordNet, with Accuracy as the evaluation metric. We observe in Table 4 that the sentiment 5http://sentiwordnet.isti.cnr.it 623 Method SentiWordNet SERBM Accuracy 0.703 0.788 Table 4: Accuracy for SERBM and SentiWordNet classification accuracy on the restaurant review dataset sees a relative improvement by 12.1% with SERBM over the SentiWordNet baseline. 
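Since the exact aggregation rules are not spelled out in the text, the following hedged sketch shows one plausible reading of the two document-level sentiment decisions compared in Section 4.3.1: summing word-level scores from a SentiWordNet-style lexicon for the baseline, and comparing the activations of the two sentiment-type hidden units for SERBM. The lexicon values and unit indices are illustrative, not the authors' code.

```python
# One plausible reading of the Section 4.3.1 comparison; not the authors' code.
def sentiwordnet_score(doc_tokens, lexicon):
    """Baseline: sum word-level scores (positive minus negative per word)
    from a SentiWordNet-style lexicon and threshold at zero."""
    total = sum(lexicon.get(w, 0.0) for w in doc_tokens)
    return "positive" if total >= 0 else "negative"

def serbm_sentiment(p_hidden, pos_unit=7, neg_unit=8):
    """SERBM: compare the two sentiment-type hidden units
    (unit indices follow the layout sketched for Section 4.2.1)."""
    return "positive" if p_hidden[pos_unit] >= p_hidden[neg_unit] else "negative"

toy_lexicon = {"great": 0.8, "delicious": 0.7, "slow": -0.5, "terrible": -0.9}
print(sentiwordnet_score(["delicious", "dishes", "slow", "service"], toy_lexicon))
```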
4.3.2 Comparison with JST We additionally utilize the Polarity dataset v2.0 to conduct an additional sentiment classification experiment in order to assess SERBM’s performance more thoroughly. We compare SERBM with the advanced joint sentiment/topic model (JST) by Lin & He (2009). For the JST and the TryingJST methods only, we use the filtered subjectivity lexicon (subjective MR) as prior information, containing 374 positive and 675 negative entries, which is the same experimental setting as in Lin & He (2009). For SERBM, we use the same general setup as before except for the fact that aspectspecific priors are not used here. Table 5 provides the sentiment classification accuracies on both the overall dataset and on the subsets for each polarity, where pos. and neg. refer to the positive and negative reviews in the dataset, respectively. Method overall pos. neg. JST(%) 84.6 96.2 73 Trying-JST(%) 82 89.2 74.8 SERBM(%) 89.1 92.0 86.2 Table 5: Accuracy for SERBM and JST In Table 5, we observe that SERBM outperforms JST both in terms of the overall accuracy and for the positive/negative-specific subsets. SERBM yields a relative improvement in the overall accuracy by 5.31% over JST and by 8.66% over Trying-JST. 5 Conclusion In this paper, we have proposed the novel Sentiment-Aspect Extraction RBM (SERBM) model to jointly extract review aspects and sentiment polarities in an unsupervised setting. Our approach modifies the standard RBM model by introducing a heterogeneous structure into the hidden layer and incorporating informative priors into the model. Our experimental results show that this model can outperform LDA-based methods. Hence, our work opens up the avenue of utilizing RBM-based undirected graphical models to solve aspect extraction and sentiment classification tasks as well as other unsupervised tasks with similar structure. Appendix The joint probability distribution is defined as pθ(v, h) = 1 Zθ eEθ(v,h), (11) where Zθ is the partition function. In conjunction with Eq. 1, we obtain Eθ(bvk, h) = −bibvk − F X j=1 ajhj − F X j=1 hjW k j bvk (12) Then, we can obtain the derivation in Eq. 6. P(hj = 1 | bvk) =P(hj = 1 | h−j, bvk) =P(hj = 1, h−j, bvk) P(h−j, bvk) = P(hj = 1, h−j, bvk) P(hj = 1, h−j, bvk) + P(hj = 0, h−j, bvk) = 1 Z e−E(hj=1,h−j,bvk) 1 Z e−E(hj=1,h−j,bvk) + 1 Z e−E(hj=0,h−j,bvk) = e−E(hj=1,h−j,bvk) e−E(hj=1,h−j,bvk) + e−E(hj=0,h−j,bvk) = 1 1 + e−E(hj=0,h−j,bvk)+E(hj=1,h−j,bvk) =σ(aj + W k j bvk) (13) Acknowledgments The research at IIIS was supported by China 973 Program Grants 2011CBA00300, 2011CBA00301 and NSFC Grants 61033001, 61361136003, 61450110088. The research at CASIA was supported by the National Basic Research Program of China Grant No. 2012CB316300 and NSFC Grants 61272332 and 61202329. 624 References Farah Benamara, Carmine Cesarano, Antonio Picariello, Diego Reforgiato Recupero, and Venkatramana Subrahmanian. 2007. Sentiment analysis: Adjectives and adverbs are better than adjectives alone. In Proceedings of ICWSM 2007. David Blei, Andrew Ng, and Michael Jordan. 2003. Latent dirichlet allocation. Journal of Machine Learning Research, 3:993–1022. Samuel Brody and Noemie Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In Proceedings of NAACL-HLT 2010, pages 804– 812. Association for Computational Linguistics. Yejin Choi and Claire Cardie. 2010. Hierarchical sequential learning for extracting opinions and their attributes. In Proceedings of ACL 2010, pages 269– 274. Association for Computational Linguistics. 
Gayatree Ganu, Noemie Elhadad, and Am´elie Marian. 2009. Beyond the stars: Improving rating predictions using review text content. In Proceedings of WebDB 2009, pages 1–6. Geoffrey Hinton and Ruslan Salakhutdinov. 2009. Replicated softmax: an undirected topic model. In Advances in Neural Information Processing Systems (NIPS 2009), pages 1607–1614. Minqing Hu and Bing Liu. 2004. Mining and summarizing customer reviews. In Proceedings of KDD 2004, pages 168–177, New York, NY, USA. ACM. Niklas Jakob and Iryna Gurevych. 2010. Extracting opinion targets in a single-and cross-domain setting with Conditional Random Fields. In Proceedings of EMNLP 2010, pages 1035–1045. Association for Computational Linguistics. Wei Jin, Hung Hay Ho, and Rohini K Srihari. 2009. A novel lexicalized HMM-based learning framework for Web opinion mining. In Proceedings of ICML 2009, pages 465–472. Nozomi Kobayashi, Kentaro Inui, and Yuji Matsumoto. 2007. Extracting aspect-evaluation and aspect-of relations in opinion mining. In Proceedings of EMNLP-CoNLL, pages 1065–1074. Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of the 18th ACM Conference on Information and Knowledge Management (CIKM 2009), pages 375–384. ACM. Bing Liu, Wynne Hsu, and Yiming Ma. 1998. Integrating classification and association rule mining. In Proceedings of KDD 1998, pages 80–86. AAAI Press. Bing Liu, Minqing Hu, and Junsheng Cheng. 2005. Opinion observer: analyzing and comparing opinions on the Web. In Proceedings of the 14th international conference on World Wide Web, pages 342– 351. ACM. Kang Liu, Liheng Xu, and Jun Zhao. 2012. Opinion target extraction using word-based translation model. In Proceedings of EMNLP-CoNLL 2012, pages 1346–1356. Kang Liu, Liheng Xu, Yang Liu, and Jun Zhao. 2013. Opinion target extraction using partially-supervised word alignment model. In Proceedings of the 23rd International Joint Conference on Artificial Intelligence (IJCAI 2013), pages 2134–2140. AAAI Press. Bing Liu. 2012. Sentiment analysis and opinion mining. Synthesis Lectures on Human Language Technologies, 5(1):1–167. Qiaozhu Mei, Xu Ling, Matthew Wondra, Hang Su, and ChengXiang Zhai. 2007. Topic sentiment mixture: modeling facets and opinions in weblogs. In Proceedings of the 16th international conference on the World Wide Web (WWW 2007), pages 171–180. ACM. Arjun Mukherjee and Bing Liu. 2012. Aspect extraction through semi-supervised modeling. In Proceedings of ACL 2012, pages 339–348. Bo Pang and Lillian Lee. 2008. Opinion mining and sentiment analysis. Foundations and trends in information retrieval, 2(1-2):1–135. Bo Pang, Lillian Lee, and Shivakumar Vaithyanathan. 2002. Thumbs up?: Sentiment classification using machine learning techniques. In Proceedings of EMNLP 2002, pages 79–86. Association for Computational Linguistics. Ana-Maria Popescu and Orena Etzioni. 2005. Extracting product features and opinions from reviews. In Proceedings of HLT/EMNLP 2005. Springer. Swapna Somasundaran and Janyce Wiebe. 2009. Recognizing stances in online debates. In Proceedings of ACL-IJCNLP 2009, pages 226–234. Association for Computational Linguistics. Ivan Titov and Ryan McDonald. 2008. Modeling online reviews with multi-grain topic models. In Proceedings of the 17th international conference on the World Wide Web (WWW 2008), pages 111–120. ACM. Taras Zagibalov and John Carroll. 2008. Automatic seed word selection for unsupervised sentiment classification of Chinese text. 
In Proceedings of COLING 2008, pages 1073–1080. Wayne Xin Zhao, Jing Jiang, Hongfei Yan, and Xiaoming Li. 2010. Jointly modeling aspects and opinions with a MaxEnt-LDA hybrid. In Proceedings of EMNLP 2010, pages 56–65. Association for Computational Linguistics. Li Zhuang, Feng Jing, and Xiao-Yan Zhu. 2006. Movie review mining and summarization. In Proceedings of the 15th ACM international Conference on Information and Knowledge Management (CIKM 2006), pages 43–50. ACM. 625
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 626–634, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Classifying Relations by Ranking with Convolutional Neural Networks C´ıcero Nogueira dos Santos IBM Research 138/146 Av. Pasteur Rio de Janeiro, RJ, Brazil [email protected] Bing Xiang IBM Watson 1101 Kitchawan Yorktown Heights, NY, USA [email protected] Bowen Zhou IBM Watson 1101 Kitchawan Yorktown Heights, NY, USA [email protected] Abstract Relation classification is an important semantic processing task for which state-ofthe-art systems still rely on costly handcrafted features. In this work we tackle the relation classification task using a convolutional neural network that performs classification by ranking (CR-CNN). We propose a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes. We perform experiments using the the SemEval-2010 Task 8 dataset, which is designed for the task of classifying the relationship between two nominals marked in a sentence. Using CRCNN, we outperform the state-of-the-art for this dataset and achieve a F1 of 84.1 without using any costly handcrafted features. Additionally, our experimental results show that: (1) our approach is more effective than CNN followed by a softmax classifier; (2) omitting the representation of the artificial class Other improves both precision and recall; and (3) using only word embeddings as input features is enough to achieve state-of-the-art results if we consider only the text between the two target nominals. 1 Introduction Relation classification is an important Natural Language Processing (NLP) task which is normally used as an intermediate step in many complex NLP applications such as question-answering and automatic knowledge base construction. Since the last decade there has been increasing interest in applying machine learning approaches to this task (Zhang, 2004; Qian et al., 2009; Rink and Harabagiu, 2010). One reason is the availability of benchmark datasets such as the SemEval-2010 task 8 dataset (Hendrickx et al., 2010), which encodes the task of classifying the relationship between two nominals marked in a sentence. The following sentence contains an example of the Component-Whole relation between the nominals “introduction” and “book”. The [introduction]e1 in the [book]e2 is a summary of what is in the text. Some recent work on relation classification has focused on the use of deep neural networks with the aim of reducing the number of handcrafted features (Socher et al., 2012; Zeng et al., 2014; Yu et al., 2014). However, in order to achieve state-ofthe-art results these approaches still use some features derived from lexical resources such as WordNet or NLP tools such as dependency parsers and named entity recognizers (NER). In this work, we propose a new convolutional neural network (CNN), which we name Classification by Ranking CNN (CR-CNN), to tackle the relation classification task. The proposed network learns a distributed vector representation for each relation class. Given an input text segment, the network uses a convolutional layer to produce a distributed vector representation of the text and compares it to the class representations in order to produce a score for each class. We propose a new pairwise ranking loss function that makes it easy to reduce the impact of artificial classes. 
We perform an extensive number of experiments using the the SemEval-2010 Task 8 dataset. Using CRCNN, and without the need for any costly handcrafted feature, we outperform the state-of-the-art for this dataset. Our experimental results are evidence that: (1) CR-CNN is more effective than CNN followed by a softmax classifier; (2) omitting the representation of the artificial class Other improves both precision and recall; and (3) using only word embeddings as input features is enough to achieve state-of-the-art results if we consider only the text between the two target nominals. 626 The remainder of the paper is structured as follows. Section 2 details the proposed neural network. In Section 3, we present details about the setup of experimental evaluation, and then describe the results in Section 4. In Section 5, we discuss previous work in deep neural networks for relation classification and for other NLP tasks. Section 6 presents our conclusions. 2 The Proposed Neural Network Given a sentence x and two target nouns, CR-CNN computes a score for each relation class c ∈C. For each class c ∈C, the network learns a distributed vector representation which is encoded as a column in the class embedding matrix W classes. As detailed in Figure 1, the only input for the network is the tokenized text string of the sentence. In the first step, CR-CNN transforms words into realvalued feature vectors. Next, a convolutional layer is used to construct a distributed vector representations of the sentence, rx. Finally, CR-CNN computes a score for each relation class c ∈C by performing a dot product between r⊺ x and W classes. 2.1 Word Embeddings The first layer of the network transforms words into representations that capture syntactic and semantic information about the words. Given a sentence x consisting of N words x = {w1, w2, ..., wN}, every word wn is converted into a real-valued vector rwn. Therefore, the input to the next layer is a sequence of real-valued vectors embx = {rw1, rw2, ..., rwN } Word representations are encoded by column vectors in an embedding matrix W wrd ∈Rdw×|V |, where V is a fixed-sized vocabulary. Each column W wrd i ∈Rdw corresponds to the word embedding of the i-th word in the vocabulary. We transform a word w into its word embedding rw by using the matrix-vector product: rw = W wrdvw where vw is a vector of size |V | which has value 1 at index w and zero in all other positions. The matrix W wrd is a parameter to be learned, and the size of the word embedding dw is a hyperparameter to be chosen by the user. 2.2 Word Position Embeddings In the task of relation classification, information that is needed to determine the class of a relation Figure 1: CR-CNN: a Neural Network for classifying by ranking. between two target nouns normally comes from words which are close to the target nouns. Zeng et al. (2014) propose the use of word position embeddings (position features) which help the CNN by keeping track of how close words are to the target nouns. These features are similar to the position features proposed by Collobert et al. (2011) for the Semantic Role Labeling task. In this work we also experiment with the word position embeddings (WPE) proposed by Zeng et al. (2014). The WPE is derived from the relative distances of the current word to the target noun1 and noun2. For instance, in the sentence shown in Figure 1, the relative distances of left to car and plant are -1 and 2, respectively. 
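To illustrate the input layer described in Sections 2.1 and 2.2, the sketch below looks up word embeddings and word position embeddings for a toy sentence. The vocabulary, embedding sizes, and distance-clipping range are arbitrary choices, and the relative distances follow the sign convention of the "left" / "car" / "plant" example above (-1 and 2).

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"the": 0, "car": 1, "left": 2, "plant": 3, "<pad>": 4}
d_w, d_wpe, max_dist = 8, 4, 30   # toy sizes, not the paper's hyperparameters

W_wrd = 0.1 * rng.standard_normal((d_w, len(vocab)))          # word embeddings
W_wpe = 0.1 * rng.standard_normal((2 * max_dist + 1, d_wpe))  # one vector per distance

def embed(tokens, e1_pos, e2_pos):
    rows = []
    for n, w in enumerate(tokens):
        r_w = W_wrd[:, vocab[w]]                              # r_w = W_wrd * v_w
        # Relative distances to the two target nouns, clipped and shifted
        # so that they index rows of W_wpe.
        d1 = np.clip(e1_pos - n, -max_dist, max_dist) + max_dist
        d2 = np.clip(e2_pos - n, -max_dist, max_dist) + max_dist
        wpe = np.concatenate([W_wpe[d1], W_wpe[d2]])          # wpe_w = [wp1, wp2]
        rows.append(np.concatenate([r_w, wpe]))               # [r_w, wpe_w]
    return np.stack(rows)                                     # N x (d_w + 2*d_wpe)

# "left" (position 2) has distances -1 to "car" (pos 1) and 2 to "plant" (pos 4).
emb_x = embed(["the", "car", "left", "the", "plant"], e1_pos=1, e2_pos=4)
print(emb_x.shape)   # (5, 16)
```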
As in (Collobert et al., 2011), each relative distance is mapped to a vector of dimension $d_{wpe}$, which is initialized with random numbers. $d_{wpe}$ is a hyperparameter of the network. Given the vectors $wp_1$ and $wp_2$ for the word w with respect to the targets noun1 and noun2, the position embedding of w is given by the concatenation of these two vectors, $wpe_w = [wp_1, wp_2]$.
In the experiments where word position embeddings are used, the word embedding and the word position embedding of each word are concatenated to form the input for the convolutional layer, $emb_x = \{[r_{w_1}, wpe_{w_1}], [r_{w_2}, wpe_{w_2}], \ldots, [r_{w_N}, wpe_{w_N}]\}$.
2.3 Sentence Representation
The next step in the NN consists in creating the distributed vector representation $r_x$ for the input sentence x. The main challenges in this step are the variability of sentence sizes and the fact that important information can appear at any position in the sentence. In recent work, convolutional approaches have been used to tackle these issues when creating representations for text segments of different sizes (Zeng et al., 2014; Hu et al., 2014; dos Santos and Gatti, 2014) and character-level representations of words of different sizes (dos Santos and Zadrozny, 2014). Here, we use a convolutional layer to compute distributed vector representations of the sentence. The convolutional layer first produces local features around each word in the sentence. Then, it combines these local features using a max operation to create a fixed-sized vector for the input sentence.
Given a sentence x, the convolutional layer applies a matrix-vector operation to each window of k successive words in $emb_x = \{r_{w_1}, r_{w_2}, \ldots, r_{w_N}\}$. Let us define the vector $z_n \in \mathbb{R}^{d_w k}$ as the concatenation of a sequence of k word embeddings, centered on the n-th word:
$$z_n = \big(r_{w_{n-(k-1)/2}}, \ldots, r_{w_{n+(k-1)/2}}\big)^{\top}$$
In order to overcome the issue of referencing words with indices outside of the sentence boundaries, we augment the sentence with a special padding token replicated $(k-1)/2$ times at the beginning and the end. The convolutional layer computes the j-th element of the vector $r_x \in \mathbb{R}^{d_c}$ as follows:
$$[r_x]_j = \max_{1 < n < N} \big[f\big(W^1 z_n + b^1\big)\big]_j$$
where $W^1 \in \mathbb{R}^{d_c \times d_w k}$ is the weight matrix of the convolutional layer and f is the hyperbolic tangent function. The same matrix is used to extract local features around each word window of the given sentence. The fixed-sized distributed vector representation for the sentence is obtained by taking the max over all word windows. The matrix $W^1$ and the vector $b^1$ are parameters to be learned. The number of convolutional units $d_c$ and the size of the word context window k are hyperparameters to be chosen by the user. It is important to note that $d_c$ corresponds to the size of the sentence representation.
2.4 Class Embeddings and Scoring
Given the distributed vector representation of the input sentence x, the network with parameter set θ computes the score for a class label $c \in C$ by using the dot product
$$s_\theta(x)_c = r_x^{\top} \, [W^{classes}]_c$$
where $W^{classes}$ is an embedding matrix whose columns encode the distributed vector representations of the different class labels, and $[W^{classes}]_c$ is the column vector that contains the embedding of class c. Note that the number of dimensions in each class embedding must be equal to the size of the sentence representation, which is defined by $d_c$. The embedding matrix $W^{classes}$ is a parameter to be learned by the network. It is initialized by randomly sampling each value from a uniform distribution $U(-r, r)$, where $r = \sqrt{\dfrac{6}{|C| + d_c}}$.
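Below is a compact sketch of the convolution, max-pooling, and scoring steps of Sections 2.3 and 2.4, with toy dimensions and randomly initialized (untrained) parameters. Zero rows stand in for the padding token's embedding; the sketch is meant only to show the shapes and operations, not to reproduce the trained CR-CNN.

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, k, d_c, n_classes, N = 16, 3, 20, 9, 5        # d_in = d_w + 2*d_wpe (toy)

W1 = 0.1 * rng.standard_normal((d_c, d_in * k))     # convolutional weights
b1 = np.zeros(d_c)
W_classes = 0.1 * rng.standard_normal((d_c, n_classes))

def sentence_representation(emb_x):
    # Zero rows approximate the padding-token embeddings replicated (k-1)/2 times.
    pad = np.zeros(((k - 1) // 2, emb_x.shape[1]))
    padded = np.vstack([pad, emb_x, pad])
    windows = []
    for n in range(emb_x.shape[0]):
        z_n = padded[n:n + k].reshape(-1)           # concatenation of k embeddings
        windows.append(np.tanh(W1 @ z_n + b1))      # local features for one window
    return np.max(np.stack(windows), axis=0)        # element-wise max over windows

def class_scores(r_x):
    return r_x @ W_classes                          # s_theta(x)_c = r_x^T [W_classes]_c

emb_x = rng.standard_normal((N, d_in))              # stand-in for the embedded sentence
r_x = sentence_representation(emb_x)
print(class_scores(r_x).shape)                      # (9,)
```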
2.5 Training Procedure Our network is trained by minimizing a pairwise ranking loss function over the training set D. The input for each training round is a sentence x and two different class labels y+ ∈C and c−∈C, where y+ is a correct class label for x and c−is not. Let sθ(x)y+ and sθ(x)c−be respectively the scores for class labels y+ and c−generated by the network with parameter set θ. We propose a new logistic loss function over these scores in order to train CR-CNN: L = log(1 + exp(γ(m+ −sθ(x)y+)) + log(1 + exp(γ(m−+ sθ(x)c−)) (1) where m+ and m−are margins and γ is a scaling factor that magnifies the difference between the score and the margin and helps to penalize more on the prediction errors. The first term in the right side of Equation 1 decreases as the score sθ(x)y+ increases. The second term in the right 628 side decreases as the score sθ(x)c−decreases. Training CR-CNN by minimizing the loss function in Equation 1 has the effect of training to give scores greater than m+ for the correct class and (negative) scores smaller than −m−for incorrect classes. In our experiments we set γ to 2, m+ to 2.5 and m−to 0.5. We use L2 regularization by adding the term β∥θ∥2 to Equation 1. In our experiments we set β to 0.001. We use stochastic gradient descent (SGD) to minimize the loss function with respect to θ. Like some other ranking approaches that only update two classes/examples at every training round (Weston et al., 2011; Gao et al., 2014), we can efficiently train the network for tasks which have a very large number of classes. This is an advantage over softmax classifiers. On the other hand, sampling informative negative classes/examples can have a significant impact in the effectiveness of the learned model. In the case of our loss function, more informative negative classes are the ones with a score larger than −m−. The number of classes in the relation classification dataset that we use in our experiments is small. Therefore, in our experiments, given a sentence x with class label y+, the incorrect class c− that we choose to perform a SGD step is the one with the highest score among all incorrect classes c−= arg max c ∈C; c̸=y+ sθ(x)c. For tasks where the number of classes is large, we can fix a number of negative classes to be considered at each example and select the one with the largest score to perform a gradient step. This approach is similar to the one used by Weston et al. (2014) to select negative examples. We use the backpropagation algorithm to compute gradients of the network. In our experiments, we implement the CR-CNN architecture and the backpropagation algorithm using Theano (Bergstra et al., 2010). 2.6 Special Treatment of Artificial Classes In this work, we consider a class as artificial if it is used to group items that do not belong to any of the actual classes. An example of artificial class is the class Other in the SemEval 2010 relation classification task. In this task, the artificial class Other is used to indicate that the relation between two nominals does not belong to any of the nine relation classes of interest. Therefore, the class Other is very noisy since it groups many different types of relations that may not have much in common. An important characteristic of CR-CNN is that it makes it easy to reduce the effect of artificial classes by omitting their embeddings. If the embedding of a class label c is omitted, it means that the embedding matrix W classes does not contain a column vector for c. 
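The sketch below implements the pairwise ranking loss of Eq. (1) together with the choice of the negative class as the highest-scoring incorrect class and, anticipating the treatment of Section 2.6, the zeroing of the positive term when the gold label is the artificial class Other. The values of γ, m+, and m− follow those reported above, while the example scores are made up.

```python
import numpy as np

gamma, m_plus, m_minus = 2.0, 2.5, 0.5   # values used in the paper's experiments

def ranking_loss(scores, y_pos, other_label="Other"):
    """scores: dict mapping each natural class c to s_theta(x)_c."""
    neg_candidates = {c: s for c, s in scores.items() if c != y_pos}
    c_neg = max(neg_candidates, key=neg_candidates.get)       # most offending class

    neg_term = np.log1p(np.exp(gamma * (m_minus + scores[c_neg])))
    if y_pos == other_label:
        # Artificial class: its embedding is omitted, so the positive term is zero.
        return neg_term, c_neg
    pos_term = np.log1p(np.exp(gamma * (m_plus - scores[y_pos])))
    return pos_term + neg_term, c_neg

scores = {"Cause-Effect(e1,e2)": 1.2, "Component-Whole(e1,e2)": -0.3,
          "Entity-Origin(e1,e2)": 0.4}
loss, c_neg = ranking_loss(scores, "Cause-Effect(e1,e2)")
print(round(float(loss), 3), c_neg)
```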
One of the main benefits from this strategy is that the learning process focuses on the “natural” classes only. Since the embedding of the artificial class is omitted, it will not influence the prediction step, i.e., CR-CNN does not produce a score for the artificial class. In our experiments with the SemEval-2010 relation classification task, when training with a sentence x whose class label y = Other, the first term in the right side of Equation 1 is set to zero. During prediction time, a relation is classified as Other only if all actual classes have negative scores. Otherwise, it is classified with the class which has the largest score. 3 Experimental Setup 3.1 Dataset and Evaluation Metric We use the SemEval-2010 Task 8 dataset to perform our experiments. This dataset contains 10,717 examples annotated with 9 different relation types and an artificial relation Other, which is used to indicate that the relation in the example does not belong to any of the nine main relation types. The nine relations are Cause-Effect, Component-Whole, Content-Container, EntityDestination, Entity-Origin, Instrument-Agency, Member-Collection, Message-Topic and ProductProducer. Each example contains a sentence marked with two nominals e1 and e2, and the task consists of predicting the relation between the two nominals taking into consideration the directionality. That means that the relation CauseEffect(e1,e2) is different from the relation CauseEffect(e2,e1), as shown in the examples below. More information about this dataset can be found in (Hendrickx et al., 2010). The [war]e1 resulted in other collateral imperial [conquests]e2 as well. ⇒Cause-Effect(e1,e2) The [burst]e1 has been caused by water hammer [pressure]e2. ⇒Cause-Effect(e2,e1) The SemEval-2010 Task 8 dataset is already partitioned into 8,000 training instances and 2,717 test instances. We score our systems by using the SemEval-2010 Task 8 official scorer, which computes the macro-averaged F1-scores for the nine 629 actual relations (excluding Other) and takes the directionality into consideration. 3.2 Word Embeddings Initialization The word embeddings used in our experiments are initialized by means of unsupervised pre-training. We perform pre-training using the skip-gram NN architecture (Mikolov et al., 2013) available in the word2vec tool. We use the December 2013 snapshot of the English Wikipedia corpus to train word embeddings with word2vec. We preprocess the Wikipedia text using the steps described in (dos Santos and Gatti, 2014): (1) removal of paragraphs that are not in English; (2) substitution of non-western characters for a special character; (3) tokenization of the text using the tokenizer available with the Stanford POS Tagger (Toutanova et al., 2003); (4) removal of sentences that are less than 20 characters long (including white spaces) or have less than 5 tokens. (5) lowercase all words and substitute each numerical digit by a 0. The resulting clean corpus contains about 1.75 billion tokens. 3.3 Neural Network Hyper-parameter We use 4-fold cross-validation to tune the neural network hyperparameters. Learning rates in the range of 0.03 and 0.01 give relatively similar results. Best results are achieved using between 10 and 15 training epochs, depending on the CR-CNN configuration. In Table 1, we show the selected hyperparameter values. Additionally, we use a learning rate schedule that decreases the learning rate λ according to the training epoch t. The learning rate for epoch t, λt, is computed using the equation: λt = λ t . 
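For completeness, here is a small sketch of the prediction rule described above (classify as Other only when every natural class scores negative) and of the per-epoch learning rate schedule λt = λ/t from Section 3.3. The example scores are invented.

```python
def predict(scores):
    """Return 'Other' only if every natural class has a negative score;
    otherwise return the class with the largest score."""
    best_class = max(scores, key=scores.get)
    return best_class if scores[best_class] > 0 else "Other"

def learning_rate(initial_lr, epoch):
    """lambda_t = lambda / t, with epochs counted from 1."""
    return initial_lr / epoch

print(predict({"Cause-Effect(e1,e2)": -0.7, "Message-Topic(e1,e2)": -0.2}))  # Other
print(predict({"Cause-Effect(e1,e2)": 1.4, "Message-Topic(e1,e2)": -0.2}))   # Cause-Effect(e1,e2)
print(learning_rate(0.025, 5))                                               # 0.005
```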
Parameter Parameter Name Value dw Word Emb. size 400 dwpe Word Pos. Emb. size 70 dc Convolutinal Units 1000 k Context Window size 3 λ Initial Learning Rate 0.025 Table 1: CR-CNN Hyperparameters 4 Experimental Results 4.1 Word Position Embeddings and Input Text Span In the experiments discussed in this section we assess the impact of using word position embeddings (WPE) and also propose a simpler alternative approach that is almost as effective as WPEs. The main idea behind the use of WPEs in relation classification task is to give some hint to the convolutional layer of how close a word is to the target nouns, based on the assumption that closer words have more impact than distant words. Here we hypothesize that most of the information needed to classify the relation appear between the two target nouns. Based on this hypothesis, we perform an experiment where the input for the convolutional layer consists of the word embeddings of the word sequence {we1 −1, ..., we2 + 1} where e1 and e2 correspond to the positions of the first and the second target nouns, respectively. In Table 2 we compare the results of different CR-CNN configurations. The first column indicates whether the full sentence was used (Yes) or whether the text span between the target nouns was used (No). The second column informs if the WPEs were used or not. It is clear that the use of WPEs is essential when the full sentence is used, since F1 jumps from 74.3 to 84.1. This effect of WPEs is reported by (Zeng et al., 2014). On the other hand, when using only the text span between the target nouns, the impact of WPE is much smaller. With this strategy, we achieve a F1 of 82.8 using only word embeddings as input, which is a result as good as the previous state-of-the-art F1 of 83.0 reported in (Yu et al., 2014) for the SemEval2010 Task 8 dataset. This experimental result also suggests that, in this task, the CNN works better for short texts. All experiments reported in the next sections use CR-CNN with full sentence and WPEs. Full Word Prec. Rec. F1 Sentence Position Yes Yes 83.7 84.7 84.1 No Yes 83.3 83.9 83.5 No No 83.4 82.3 82.8 Yes No 78.1 71.5 74.3 Table 2: Comparison of different CR-CNN configurations. 630 4.2 Impact of Omitting the Embedding of the artificial class Other In this experiment we assess the impact of omitting the embedding of the class Other. As we mentioned above, this class is very noisy since it groups many different infrequent relation types. Its embedding is difficult to define and therefore brings noise into the classification process of the natural classes. In Table 3 we present the results comparing the use and omission of embedding for the class Other. The two first lines of results present the official F1, which does not take into account the results for the class Other. We can see that by omitting the embedding of the class Other both precision and recall for the other classes improve, which results in an increase of 1.4 in the F1. These results suggest that the strategy we use in CR-CNN to avoid the noise of artificial classes is effective. Use embedding Class Prec. Rec. F1 of class Other No All 83.7 84.7 84.1 Yes All 81.3 84.3 82.7 No Other 52.0 48.7 50.3 Yes Other 60.1 48.7 53.8 Table 3: Impact of not using an embedding for the artificial class Other. In the two last lines of Table 3 we present the results for the class Other. 
We can note that while the recall for the cases classified as Other remains 48.7, the precision significantly decreases from 60.1 to 52.0 when the embedding of the class Other is not used. That means that more cases from natural classes (all) are now been classified as Other. However, as both the precision and the recall of the natural classes increase, the cases that are now classified as Other must be cases that are also wrongly classified when the embedding of the class Other is used. 4.3 CR-CNN versus CNN+Softmax In this section we report experimental results comparing CR-CNN with CNN+Softmax. In order to do a fair comparison, we’ve implemented a CNN+Softmax and trained it with the same data, word embeddings and WPEs used in CR-CNN. Concretely, our CNN+Softmax consists in getting the output of the convolutional layer, which is the vector rx in Figure 1, and giving it as input for a softmax classifier. We tune the parameters of CNN+Softmax by using a 4-fold cross-validation with the training set. Compared to the hyperparameter values for CR-CNN presented in Table 1, the only difference for CNN+Softmax is the number of convolutional units dc, which is set to 400. In Table 4 we compare the results of CRCNN and CNN+Softmax. CR-CNN outperforms CNN+Softmax in both precision and recall, and improves the F1 by 1.6. The third line in Table 4 shows the result reported by Zeng et al. (2014) when only word embeddings and WPEs are used as input to the network (similar to our CNN+Softmax). We believe that the word embeddings employed by them is the main reason their result is much worse than that of CNN+Softmax. We use word embeddings of size 400 while they use word embeddings of size 50, which were trained using much less unlabeled data than we did. Neural Net. Prec. Rec. F1 CR-CNN 83.7 84.7 84.1 CNN+SoftMax 82.1 83.1 82.5 CNN+SoftMax 78.9 (Zeng et al., 2014) Table 4: Comparison of results of CR-CNN and CNN+Softmax. 4.4 Comparison with the State-of-the-art In Table 5 we compare CR-CNN results with results recently published for the SemEval-2010 Task 8 dataset. Rink and Harabagiu (2010) present a support vector machine (SVM) classifier that is fed with a rich (traditional) feature set. It obtains an F1 of 82.2, which was the best result at SemEval-2010 Task 8. Socher et al. (2012) present results for a recursive neural network (RNN) that employs a matrix-vector representation to every node in a parse tree in order to compose the distributed vector representation for the complete sentence. Their method is named the matrix-vector recursive neural network (MVRNN) and achieves a F1 of 82.4 when POS, NER and WordNet features are used. In (Zeng et al., 2014), the authors present results for a CNN+Softmax classifier which employs lexical and sentencelevel features. Their classifier achieves a F1 of 82.7 when adding a handcrafted feature based on the WordNet. Yu et al. (2014) present the Factor631 based Compositional Embedding Model (FCM), which achieves a F1 of 83.0 by deriving sentencelevel and substructure embeddings from word embeddings utilizing dependency trees and named entities. As we can see in the last line of Table 5, CRCNN using the full sentence, word embeddings and WPEs outperforms all previous reported results and reaches a new state-of-the-art F1 of 84.1. This is a remarkable result since we do not use any complicated features that depend on external lexical resources such as WordNet and NLP tools such as named entity recognizers (NERs) and dependency parsers. 
We can see in Table 5 that CR-CNN1 also achieves the best result among the systems that use word embeddings as the only input features. The closest result (80.6), which is produced by the FCM system of Yu et al. (2014), is 2.2 F1 points behind CR-CNN result (82.8). 4.5 Most Representative Trigrams for each Relation In Table 6, for each relation type we present the five trigrams in the test set which contributed the most for scoring correctly classified examples. Remember that in CR-CNN, given a sentence x the score for the class c is computed by sθ(x)c = r⊺ x[W classes]c. In order to compute the most representative trigram of a sentence x, we trace back each position in rx to find the trigram responsible for it. For each trigram t, we compute its particular contribution for the score by summing the terms in score that use positions in rx that trace back to t. The most representative trigram in x is the one with the largest contribution to the improvement of the score. In order to create the results presented in Table 6, we rank the trigrams which were selected as the most representative of any sentence in decreasing order of contribution value. If a trigram appears as the largest contributor for more than one sentence, its contribuition value becomes the sum of its contribution for each sentence. We can see in Table 6 that for most classes, the trigrams that contributed the most to increase the score are indeed very informative regarding the relation type. As expected, different trigrams play an important role depending on the direction of the relation. For instance, the most informative tri1This is the result using only the text span between the target nouns. gram for Entity-Origin(e1,e2) is “away from the”, while reverse direction of the relation, EntityOrigin(e2,e1) or Origin-Entity, has “the source of” as the most informative trigram. These results are a step towards the extraction of meaningful knowledge from models produced by CNNs. 5 Related Work Over the years, various approaches have been proposed for relation classification (Zhang, 2004; Qian et al., 2009; Hendrickx et al., 2010; Rink and Harabagiu, 2010). Most of them treat it as a multiclass classification problem and apply a variety of machine learning techniques to the task in order to achieve a high accuracy. Recently, deep learning (Bengio, 2009) has become an attractive area for multiple applications, including computer vision, speech recognition and natural language processing. Among the different deep learning strategies, convolutional neural networks have been successfully applied to different NLP task such as part-of-speech tagging (dos Santos and Zadrozny, 2014), sentiment analysis (Kim, 2014; dos Santos and Gatti, 2014), question classification (Kalchbrenner et al., 2014), semantic role labeling (Collobert et al., 2011), hashtag prediction (Weston et al., 2014), sentence completion and response matching (Hu et al., 2014). Some recent work on deep learning for relation classification include Socher et al. (2012), Zeng et al. (2014) and Yu et al. (2014). In (Socher et al., 2012), the authors tackle relation classification using a recursive neural network (RNN) that assigns a matrix-vector representation to every node in a parse tree. The representation for the complete sentence is computed bottom-up by recursively combining the words according to the syntactic structure of the parse tree Their method is named the matrix-vector recursive neural network (MVRNN). Zeng et al. 
(2014) propose an approach for relation classification where sentence-level features are learned through a CNN, which has word embedding and position features as its input. In parallel, lexical features are extracted according to given nouns. Then sentence-level and lexical features are concatenated into a single vector and fed into a softmax classifier for prediction. This approach achieves state-of-the-art performance on the SemEval-2010 Task 8 dataset. Yu et al. (2014) propose a Factor-based Com632 Classifier Feature Set F1 SVM POS, prefixes, morphological, WordNet, dependency parse, 82.2 (Rink and Harabagiu, 2010) Levin classes, ProBank, FrameNet, NomLex-Plus, Google n-gram, paraphrases, TextRunner RNN word embeddings 74.8 (Socher et al., 2012) word embeddings, POS, NER, WordNet 77.6 MVRNN word embeddings 79.1 (Socher et al., 2012) word embeddings, POS, NER, WordNet 82.4 word embeddings 69.7 CNN+Softmax word embeddings, word position embeddings, 82.7 (Zeng et al., 2014) word pair, words around word pair, WordNet FCM word embeddings 80.6 (Yu et al., 2014) word embeddings, dependency parse, NER 83.0 CR-CNN word embeddings 82.8 word embeddings, word position embeddings 84.1 Table 5: Comparison with results published in the literature. Relation (e1,e2) (e2,e1) Cause-Effect e1 resulted in, e1 caused a, had caused e2 caused by, was caused by, are the, poverty cause e2, caused a e2 caused by, been caused by, e2 from e1 Component-Whole e1 of the, of the e2, part of the, e2 ’s e1, with its e1, e2 has a, in the e2, e1 on the e2 comprises the, e2 with e1 Content-Container was in a, was hidden in, were in a, e2 full of, e2 with e1, e2 was full, was inside a, was contained in e2 contained a, e2 with cold Entity-Destination e1 into the, e1 into a, e1 to the, was put inside, imported into the Entity-Origin away from the, derived from a, had the source of, e2 grape e1, left the, derived from an, e1 from the e2 butter e1 Instrument-Agency are used by, e1 for e2, is used by, with a e1, by using e1, e2 finds a, trade for e2, with the e2 e2 with a, e2 , who Member-Collection of the e2, in the e2, of this e2, e2 of e1, of wild e1, of elven e1, the political e2, e1 collected in e2 of different, of 0000 e1 Message-Topic e1 is the, e1 asserts the, e1 that the, described in the, discussed in the, on the e2, e1 inform about featured in numerous, discussed in cabinet, documented in two, Product-Producer e1 by the, by a e2, of the e2, e2 of the, e2 has constructed, e2 ’s e1, by the e2, from the e2 e2 came up, e2 who created Table 6: List of most representative trigrams for each relation type. positional Embedding Model (FCM) by deriving sentence-level and substructure embeddings from word embeddings, utilizing dependency trees and named entities. It achieves slightly higher accuracy on the same dataset than (Zeng et al., 2014), but only when syntactic information is used. There are two main differences between the approach proposed in this paper and the ones proposed in (Socher et al., 2012; Zeng et al., 2014; Yu et al., 2014): (1) CR-CNN uses a pair-wise ranking method, while other approaches apply multiclass classification by using the softmax function on the top of the CNN/RNN; and (2) CR-CNN employs an effective method to deal with artificial classes by omitting their embeddings, while other approaches treat all classes equally. 6 Conclusion In this work we tackle the relation classification task using a CNN that performs classification by ranking. 
The main contributions of this work are: (1) the definition of a new state-of-the-art for the SemEval-2010 Task 8 dataset without using any costly handcrafted features; (2) the proposal of a new CNN for classification that uses class embeddings and a new rank loss function; (3) an effective method to deal with artificial classes by omitting their embeddings in CR-CNN; (4) the demonstration that using only the text between target nominals is almost as effective as using WPEs; and (5) a method to extract from the CR-CNN model the most representative contexts of each relation type. Although we apply CR-CNN to relation classification, this method can be used for any classification task. 633 Acknowledgments The authors would like to thank Nina Wacholder for her valuable suggestions to improve the final version of the paper. References Yoshua Bengio. 2009. Learning deep architectures for ai. Foundations and Trends Machine Learning, 2(1):1–127. James Bergstra, Olivier Breuleux, Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference. R. Collobert, J. Weston, L. Bottou, M. Karlen, K. Kavukcuoglu, and P. Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. C´ıcero Nogueira dos Santos and Ma´ıra Gatti. 2014. Deep convolutional neural networks for sentiment analysis of short texts. In Proceedings of the 25th International Conference on Computational Linguistics (COLING), Dublin, Ireland. C´ıcero Nogueira dos Santos and Bianca Zadrozny. 2014. Learning character-level representations for part-of-speech tagging. In Proceedings of the 31st International Conference on Machine Learning (ICML), JMLR: W&CP volume 32, Beijing, China. Jianfeng Gao, Patrick Pantel, Michael Gamon, Xiaodong He, and Li Deng. 2014. Modeling interestingness with deep neural networks. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Iris Hendrickx, Su Nam Kim, Zornitsa Kozareva, Preslav Nakov, Diarmuid ´O. S´eaghdha, Sebastian Pad´o, Marco Pennacchiotti, Lorenza Romano, and Stan Szpakowicz. 2010. Semeval-2010 task 8: Multi-way classification of semantic relations between pairs of nominals. In Proceedings of the 5th International Workshop on Semantic Evaluation, pages 33–38. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Proceedings of the Conference on Neural Information Processing Systems, pages 2042–2050. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural netork for modelling sentences. In Proceedings of the 52th Annual Meeting of the Association for Computational Linguistics, pages 655–665, Baltimore, Maryland. Yoon Kim. 2014. Convolutional neural networks for sentence classification. In Proceedings of the 2014 Conference on Empirical Methods for Natural Language Processing, pages 1746–1751, Doha, Qatar. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. In In Proceedings of Workshop at ICLR. Longhua Qian, Guodong Zhou, Fang Kong, and Qiaoming Zhu. 2009. Semi-supervised learning for semantic relation classification using stratified sampling strategy. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1437–1445. Bryan Rink and Sanda Harabagiu. 2010. Utd: Classifying semantic relations by combining lexical and semantic resources. In Proceedings of International Workshop on Semantic Evaluation, pages 256–259. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012. Semantic compositionality through recursive matrix-vector spaces. In Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 1201–1211. Kristina Toutanova, Dan Klein, Christopher D Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, pages 173–180. Jason Weston, Samy Bengio, and Nicolas Usunier. 2011. Wsabie: Scaling up to large vocabulary image annotation. In Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, pages 2764–2770. Jason Weston, Sumit Chopra, and Keith Adams. 2014. #tagspace: Semantic embeddings from hashtags. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1822–1827. Mo Yu, Matthew Gormley, and Mark Dredze. 2014. Factor-based compositional embedding models. In Proceedings of the 2nd Workshop on Learning Semantics, Montreal, Canada. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In Proceedings of the 25th International Conference on Computational Linguistics (COLING), pages 2335–2344, Dublin, Ireland. Zhu Zhang. 2004. Weakly-supervised relation classification for information extraction. In Proceedings of the ACM International Conference on Information and Knowledge Management, pages 581–588, New York, NY, USA. 634
2015
61
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 635–644, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Semantic Representations for Domain Adaptation: A Case Study on the Tree Kernel-based Method for Relation Extraction Thien Huu Nguyen†, Barbara Plank§ and Ralph Grishman† † Computer Science Department, New York University, New York, NY 10003, USA § Center for Language Technology, University of Copenhagen, Denmark [email protected],[email protected],[email protected] Abstract We study the application of word embeddings to generate semantic representations for the domain adaptation problem of relation extraction (RE) in the tree kernelbased method. We systematically evaluate various techniques to generate the semantic representations and demonstrate that they are effective to improve the generalization performance of a tree kernel-based relation extractor across domains (up to 7% relative improvement). In addition, we compare the tree kernel-based and the feature-based method for RE in a compatible way, on the same resources and settings, to gain insights into which kind of system is more robust to domain changes. Our results and error analysis shows that the tree kernel-based method outperforms the feature-based approach. 1 Introduction Relation Extraction (RE) is an important aspect of information extraction that aims to discover the semantic relationships between two entity mentions appearing in the same sentence. Previous research on RE has followed either the kernelbased approach (Zelenko et al., 2003; Bunescu and Mooney, 2005; Zhao and Grishman, 2005; Zhang et al., 2006; Bunescu, 2007; Qian et al., 2008; Nguyen et al., 2009) or the feature-based approach (Kambhatla, 2004; Grishman et al., 2005; Zhou et al., 2005; Jiang and Zhai, 2007a; Chan and Roth, 2010; Sun et al., 2011). Usually, in such supervised machine learning systems, it is assumed that the training data and the data to which the RE system is applied to are sampled independently and identically from the same distribution. This assumption is often violated in reality and exemplified in the fact that the performance of the traditional RE techniques degrades significantly in such a domain mismatch case (Plank and Moschitti, 2013). To alleviate this performance loss, we need to resort to domain adaptation (DA) techniques to adapt a system trained on some source domain to perform well on new target domains. We here focus on the unsupervised domain adaptation (i.e., no labeled target data) and singlesystem DA (Petrov and McDonald, 2012; Plank and Moschitti, 2013), i.e., building a single system that is able to cope with different, yet related target domains. While DA has been investigated extensively in the last decade for various natural language processing (NLP) tasks, the examination of DA for RE is only very recent. To the best of our knowledge, there have been only three studies on DA for RE (Plank and Moschitti, 2013; Nguyen and Grishman, 2014; Nguyen et al., 2014). Of these, Nguyen et al. (2014) follow the supervised DA paradigm and assume some labeled data in the target domains. In contrast, Plank and Moschitti (2013) and Nguyen and Grishman (2014) work on the unsupervised DA. 
In our view, unsupervised DA is more challenging, but more realistic and practical for RE as we usually do not know which target domains we need to work on in advance, thus cannot expect to possess labeled data of the target domains. Our current work therefore focuses on the single-system unsupervised DA. Besides, note that this setting tries to construct a single system that can work robustly with different but related domains (multiple target domains), thus being different from most previous studies on DA (Blitzer et al., 2006; Blitzer et al., 2007) which have attempted to design a specialized system for every specific target domain. Plank and Moschitti (2013) propose to embed word clusters and latent semantic analysis (LSA) of words into tree kernels for DA of RE, while Nguyen and Grishman (2014) studies the appli635 cation of word clusters and word embeddings for DA of RE on the feature-based method. Although word clusters (Brown et al., 1992) have been employed by both studies to improve the performance of relation extractors across domains, the application of word embeddings (Bengio et al., 2003; Mnih and Hinton, 2008; Turian et al., 2010) for DA of RE is only examined in the feature-based method and never explored in the tree kernelbased method so far, giving rise to the first question we want to address in this paper: (i) Can word embeddings help the tree kernelbased methods on DA for RE and more importantly, in which way can we do it effectively? This question is important as word embeddings are real valued vectors, while the tree kernel-based methods rely on the symbolic matches or mismatches of concrete labels in the parse trees to compute the kernels. It is unclear at the first glance how to encode word embeddings into the tree kernels effectively so that word embeddings could help to improve the generalization performance of RE. One way is to use word embeddings to compute similarities between words and embed these similarity scores into the kernel functions, e.g., by resembling the method of Plank and Moschitti (2013) that exploited LSA (in the semantic syntactic tree kernel (SSTK), cf. §2.1). We explore various methods to apply word embeddings to generate the semantic representations for DA of RE and demonstrate that semantic representations are very effective to significantly improve the portability of the relation extractors based on the tree kernels, bringing us to the second question: (ii) Between the feature-based method in Nguyen and Grishman (2014) and the SSTK method in Plank and Moschitti (2013), which method is better for DA of RE, given the recent discovery of word embeddings for both methods? It is worth noting that besides the approach difference, these two works employ rather different resources and settings in their evaluation, making it impossible to directly compare their performance. In particular, while Plank and Moschitti (2013) only use the path-enclosed trees induced from the constituent parse trees as the representation for relation mentions, Nguyen and Grishman (2014) include a rich set of features extracted from multiple resources such as constituent trees, dependency trees, gazetteers, semantic resources in the representation. Besides, Plank and Moschitti (2013) consider the direction of relations in their evaluation (i.e, distinguishing between relation classes and their inverses) but Nguyen and Grishman (2014) disregard this relation direction. 
Finally, we note that although both studies evaluate their systems on the ACE 2005 dataset, they actually have different dataset partitions. In order to overcome this limitation, we conduct an evaluation in which the two methods are directed to use the same resources and settings, and are thus compared in a compatible manner to achieve an insight on their effectiveness for DA of RE. In fact, the problem of incompatible comparison is unfortunately very common in the RE literature (Wang, 2008; Plank and Moschitti, 2013) and we believe there is a need to tackle this increasing confusion in this line of research. Therefore, this is actually the first attempt to compare the two methods (tree kernel-based and feature-based) on the same settings. To ease the comparison for future work and circumvent the Zigglebottom pitfall (Pedersen, 2008), the entire setup and package is available.1 2 Relation Extraction Approaches In the following, we introduce the two relation extraction systems further examined in this study. 2.1 Tree kernel-based Method In the tree kernel-based method (Moschitti, 2006; Moschitti, 2008; Plank and Moschitti, 2013), a relation mention (the two entity mentions and the sentence containing them) is represented by the path-enclosed tree (PET), the smallest constituency-based subtree including the two target entity mentions (Zhang et al., 2006). The syntactic tree kernel (STK) is then defined to compute the similarity between two PET trees (where target entities are marked) by counting the common sub-trees, without enumerating the whole fragment space (Moschitti, 2006; Moschitti, 2008). STK is then applied in the support vector machines (SVMs) for RE. The major limitation of STK is its inability to match two trees that share the same substructure, but involve different though semantically related terminal nodes (words). This is caused by the hard matches between words, and consequently between sequences containing them. For instance, in the following example taken from Plank and Moschitti (2013), the two fragments “governor from Texas” and “head of Mary1https://bitbucket.org/nycphre/limo-re 636 land” would not match in STK although they have very similar syntactic structures and basically convey the same relationship. Plank and Moschitti (2013) propose to resolve this issue for STK using the semantic syntactic tree kernel (SSTK) (Bloehdorn and Moschitti, 2007) and apply it to the domain adaptation problem of RE. The two following techniques are utilized to activate the SSTK: (i) replace the part-ofspeech nodes in the PET trees by the new ones labeled by the word clusters of the corresponding terminals (words); (ii) replace the binary similarity scores between words (i.e, either 1 or 0) by the similarities induced from the latent semantic analysis (LSA) of large corpus. The former generalizes the part-of-speech similarity to the semantic similarity on word clusters; the latter, on the other hand, allows soft matches between words that have the same latent semantic but differ in symbolic representation. Both techniques emphasize the invariants of word semantics in different domains, thus being helpful to alleviate the vocabulary difference across domains. 2.2 Feature-based Method In the feature-based method (Zhou et al., 2005; Sun et al., 2011; Nguyen and Grishman, 2014), relation mentions are first transformed into rich feature vectors that capture various characteristics of the relation mentions (i.e, lexicon, syntax, semantics etc). 
The resulting vectors are then fed into the statistical classifiers such as Maximum Entropy (MaxEnt) to perform classification for RE. The main reason for the performance loss of the feature-based systems on new domains is the behavioral changes of the features when domains shift. Some features might be very informative in the source domain but become less relevant in the target domains. For instance, some words, that are very indicative in the source domain might not appear in the target domains (lexical sparsity). Consequently, the models putting high weights on such words (features) in the source domain will fail to perform well on the target domains. Nguyen and Grishman (2014) address this problem for the feature-based method in DA of RE by introducing word embeddings as additional features. The rationale is based on the fact that word embeddings are low dimensional and real valued vectors, capturing latent syntactic and semantic properties of words (Bengio et al., 2003; Mnih and Hinton, 2008; Turian et al., 2010). The embeddings of symbolically different words are often close to each other if they have similar semantic and syntactic functions. This again helps to mitigate the lexical sparsity or the vocabulary difference between the domains and has proven helpful for, amongst others, the feature-based method in DA of RE. 2.3 Tree Kernel-based vs Feature-based The feature-based method explicitly encapsulates the linguistic intuition and domain expertise for RE into the features, while the tree kernel-based method avoids the complicated feature engineering and implicitly encode the features into the computation of the tree kernels. Which method is better for DA of RE? In order to ensure the two methods (Plank and Moschitti, 2013; Nguyen and Grishman, 2014) are compared compatibly on the same resources, we make sure the two systems have access to the same amount of information. Thus, we follow Plank and Moschitti (2013) and use the PET trees (beside word clusters and word embeddings) as the only resource the two methods can exploit. For the feature-based method, we utilize all the features extractable from the PET trees that are standardly used in the state-of-the-art featurebased systems for DA of RE (Nguyen and Grishman, 2014). Specifically, the feature set employed in this paper (denoted by FET) includes: the lexical features, i.e., the context words, the head words, the bigrams, the number of words, the lexical path, the order of mention (Zhou et al., 2005; Sun et al., 2011); and the syntactic features, i.e., the path connecting the two mentions in PET and the unigrams, bigrams, trigrams along this path (Zhou et al., 2005; Jiang and Zhai, 2007a). Hypothesis: Assuming identical settings and resources, we hypothesize that the tree kernelbased method is better than the feature-based method for DA of RE. This is motivated because of at least two reasons: (i) the tree kernel-based method implicitly encodes a more comprehensive feature set (involving all the sub-trees in the PETs), thus potentially captures more domainindependent features to be useful for DA of RE; (ii) the tree kernel-based method avoids the inclusion of fine-tuned and domain-specific features originated from the excessive feature engineering (i.e., hand-designing feature sets based on the 637 linguistic intuition for specific domains) of the feature-based method. 3 Word Embeddings & Tree Kernels In this section, we first give the intuition that guides us in designing the proposed methods. 
In particular, one limitation of the syntactic semantic tree kernel presented in Plank and Moschitti (2013) (§2.1) is that semantics is highly tied to syntax (the PET trees) in the kernel computation, limiting the generalization capacity of semantics to the extent of syntactic matches. If two relation mentions have different syntactic structures, the two relation mentions will not match, although they share the same semantic representation and express the same relation class. For instance, the two fragments “Tom is the CEO of the company” and “the company, headed by Tom” express the same relationship between “Tom” and “company” based on the semantics of their context words, but cannot be matched in SSTK as their syntactic structures are different. In such a case, it is desirable to have a representation of relation mentions that is grounded on the semantics of the context words and reflects the latent semantics of the whole relation mentions. This representation is expected to be general enough to be effective on different domains. Once the semantic representation of relation mentions is established, we can use it in conjunction with the traditional tree kernels to extend their coverage. The benefit is mutual as both semantics and syntax help to generalize relation mentions to improve the recall, but also constrain each other to support precision. This is the basic idea of our approach, which we compare to the previous methods. 3.1 Methods We propose to utilize word embeddings of the context words as the principal components to obtain semantic representations for relation mentions in the tree kernel-based methods. Besides more traditional approaches to exploit word embeddings, we investigate representations that go beyond the word level and use compositionality embeddings for domain adaptation for the first time. In general, suppose we are able to acquire an additional real-valued vector Vi from word embeddings to semantically represent a relation mention Ri (along with the PET tree Ti), leading to the new representation of Ri = (Ti, Vi). The new kernel function in this case is then defined by: Knew(Ri, Rj) = (1 −α)SSTK(Ti, Tj) + αKvec(Vi, Vj) where Kvec(Vi, Vj) is some standard vector kernel like the polynomial kernels. α is a trade-off parameter and indicates whether the system attributes more weight to the traditional SSTK or the new semantic kernel Kvec. In this work, we consider the following methods to obtain the semantic representation Vi from the word embeddings of the context words of Ri (assuming d is the dimensionality of the word embeddings): HEAD: Vi = the concatenation of the word embeddings of the two entity mention heads of Ri. This representation is inherited from Nguyen and Grishman (2014) that only examine embeddings at the word level separately for the feature-based method without considering the compositionality embeddings of relation mentions. The dimensionality of HEAD is 2d. According to the principle of compositionality (Werning et al., 2006; Baroni and Zamparelli, 2010; Paperno et al., 2014), the meaning of a complex expression is determined by the meanings of its components and the rules to combine them. We study the following two compositionality embeddings for relation mentions that can be generated from the embeddings of the context words: PHRASE: Vi = the mean of the embeddings of the words contained in the PET tree Ti of Ri. 
Although this composition is simple, it is in fact competitive with the more complicated methods based on recursive neural networks (Socher et al., 2012b; Blacoe and Lapata, 2012; Sterckx et al., 2014) on representing phrase semantics. TREE: This is motivated by the training of recursive neural networks (Socher et al., 2012a) for semantic compositionality and attempts to aggregate the context word embeddings syntactically. In particular, we compute an embedding for every node in the PET tree in a bottom-up manner. The embeddings of the leaves are the embeddings of the words associated with them, while the embeddings of the internal nodes are the means of the embeddings of their child nodes. We use the embedding of the root of the PET tree to represent the relation mention in this case. Both PHRASE and TREE have d dimensions. It is also interesting to examine combinations of these three representations (cf., Table 1). 638 SIM: Finally, for completeness, we experiment with a more obvious way to introduce word embeddings into tree kernels, resembling more closely the approach of Plank and Moschitti (2013). In particular, the SIM method simply replaces the similarity scores between word pairs obtained from LSA by the cosine similarities between the word embeddings to be used in the SSTK kernel. 4 Experiments 4.1 Dataset, Resources and Parameters We use the word clusters trained by Plank and Moschitti (2013) on the ukWaC corpus (Baroni et al., 2009) with 2 billion words, and the C&W word embeddings from Turian et al. (2010) (http://metaoptimize.com/projects/wordreprs/) with 50 dimensions, following Nguyen and Grishman (2014). In order to make the comparisons compatible, we introduce word embeddings into the tree kernel by extending the package provided by Plank and Moschitti (2013), which uses the Charniak parser to obtain the constituent trees, the SVM-light-TK for the SSTK kernel in SVM, the directional relation classes, etc. We utilize the default vector kernel in the SVM-light-TK package (d=3). For the feature-based method, we apply the MaxEnt classifier in the MALLET package (http://mallet.cs.umass.edu/) with the L2 regularizer on the hierarchical architecture for relation extraction as in Nguyen and Grishman (2014). Following prior work, we evaluate the systems on the ACE 2005 dataset which involves 6 domains: broadcast news (bn), newswire (nw), broadcast conversation (bc), telephone conversation (cts), weblogs (wl) and usenet (un). The union of bn and nw (news) is used as the source domain while bc, cts and wl play the role of the target domains. We take half of bc as the only target development set, and use the remaining data and domains for testing. The dataset partition is exactly the same as in Plank and Moschitti (2013). As described in their paper, the target domains differ considerably from the source domain in the relation distributions and vocabulary. 4.2 Word Embeddings for Tree Kernel We investigate the effectiveness of different semantic representations (§3.1) in tree kernels by taking the PET tree as the baseline4, and evaluate the performance of the representations when combined with the baseline on the bc development set.
Figure 1: α vs F-measure on PET+HEAD+PHRASE
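Before turning to the results, a minimal sketch of the Section 3.1 representations and of the combined kernel may be useful. This is not the released implementation: it assumes word embeddings are available as a lookup table (emb), that the SSTK value between two PET trees has already been computed, and it uses a polynomial form as a stand-in for the default SVM-light-TK vector kernel (an assumption); the names and the tree encoding are illustrative only.

```python
# Sketch of HEAD / PHRASE / TREE (Section 3.1) and of K_new = (1-alpha)*SSTK + alpha*K_vec.
import numpy as np

def head_repr(emb, head1, head2):
    # HEAD: concatenation of the two entity mention head embeddings (2d dimensions)
    return np.concatenate([emb[head1], emb[head2]])

def phrase_repr(emb, pet_words):
    # PHRASE: mean of the embeddings of the words contained in the PET tree (d dimensions)
    return np.mean([emb[w] for w in pet_words], axis=0)

def tree_repr(emb, node):
    # TREE: bottom-up means over the PET tree; a node is a word (leaf) or a list of children
    if isinstance(node, str):
        return emb[node]
    return np.mean([tree_repr(emb, child) for child in node], axis=0)

def k_vec(v_i, v_j, degree=3):
    # stand-in polynomial vector kernel between the semantic representations
    return (1.0 + float(v_i @ v_j)) ** degree

def k_new(sstk_ij, v_i, v_j, alpha):
    # combined kernel over (T_i, V_i) and (T_j, V_j); alpha is the trade-off parameter of Section 3.1
    return (1.0 - alpha) * sstk_ij + alpha * k_vec(v_i, v_j)
```

In this reading, a relation mention Ri = (Ti, Vi) contributes its tree Ti to the SSTK term and its semantic vector Vi (for example, the concatenation of head_repr and phrase_repr for HEAD+PHRASE) to the vector-kernel term.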
Method P R F1 PET (Plank and Moschitti, 2013) 52.2 41.7 46.4 PET+SIM 39.4 37.2 38.3 PET+HEAD 60.4 44.9 51.5 PET+PHRASE 58.4 40.7 48.0 PET+TREE 59.8 42.2 49.5 PET+HEAD+PHRASE 63.2 46.2 53.4 PET+HEAD+TREE 61.0 45.7 52.3 PET+PHRASE+TREE 59.2 42.4 49.4 PET+HEAD+PHRASE+TREE 60.8 45.2 51.9 Table 1: Performance on the bc dev set for PET. Best combination (HEAD+PHRASE) is denoted WED in Table 2 Table 1 shows the results. The main conclusions include: (i) The substitution of LSA similarity scores with the word embedding cosine similarities (SIM) does not help to improve the performance of the tree kernel method. (ii) When employed independently, both the word level embeddings (HEAD) and the compositionality embeddings (PHRASE, TREE) are effective for the tree kernel-based method on DA for RE, showing a slight advantage for HEAD. (iii) Thus, the compositionality embeddings PHRASE and TREE seem to capture different information with respect to the word level embeddings HEAD. We expect the combination of HEAD with either PHRASE or TREE to further improve performance. This is the case when adding one of them at a time. PHRASE and TREE seem to capture similar information, combining all (last row in Table 1) is not the overall best system. The best performance is achieved when the HEAD and PHRASE embeddings are utilized at 4By using their system we obtained the same results. 639 nw+bn (in-dom.) bc cts wl # System: P: R: F1: P: R: F1: P: R: F1: P: R: F1: 1 PET (Plank and Moschitti, 2013) 50.6 42.1 46.0 51.2 40.6 45.3 51.0 37.8 43.4 35.4 32.8 34.0 2 PET+WED 55.8 48.7 52.0 57.3 45.7 50.8 54.0 38.1 44.7 40.1 36.5 38.2 3 PET WC 55.4 44.6 49.4 54.3 41.4 47.0 55.9 37.1 44.6 40.0 32.7 36.0 4 PET WC+WED 56.3 48.2 51.9 57.0 44.3 49.8 56.1 38.1 45.4 40.7 36.1 38.2 5 PET LSA 52.3 44.1 47.9 51.4 41.7 46.0 49.7 36.5 42.1 38.1 36.5 37.3 6 PET LSA+WED 55.2 48.5 51.6 58.8 45.8 51.5 54.1 38.1 44.7 40.9 38.5 39.6 7 PET+PET WC 55.0 46.5 50.4 54.4 43.4 48.3 54.1 38.1 44.7 38.4 34.5 36.3 8 PET+PET WC+WED 56.3 50.3 53.1 57.5 46.6 51.5 55.6 39.8 46.4 41.5 37.9 39.6 9 PET+PET LSA 52.7 46.6 49.5 53.9 45.2 49.2 49.9 37.6 42.9 37.9 38.3 38.1 10 PET+PET LSA+WED 55.5 49.9 52.6 56.8 45.8 50.8 52.5 38.6 44.5 41.6 39.3 40.5 11 PET+PET WC+PET LSA 55.1 45.9 50.1 55.3 43.1 48.5 53.1 37.0 43.6 39.9 35.8 37.8 12 PET+PET WC+PET LSA+WED 55.0 48.8 51.7 58.5 47.3 52.3 52.6 38.8 44.7 42.3 38.9 40.5 Table 2: In-domain (first column) and out-of-domain performance (columns two to four) on ACE 2005. Systems of the rows not in gray come from Plank and Moschitti (2013) (the baselines). WED means HEAD+PHRASE. the same time, reaching an F1 of 53.4% (compared to 46.4% of the baseline) on the development set. The results in Table 1 are obtained using the trade-off parameter α = 0.7. Figure 1 additionally shows the variation of the performance with changing α (for the best system on dev, i.e., for the representation PET+HEAD+PHRASE). As we can see, the performance for α > 0.5 is in general better, suggesting a preference for the semantic representation over the syntactic representation in DA for RE. The performance reaches its peak when the suitable amounts of semantics and syntax are combined (i.e, α = 0.7). In the following experiments, we use the embedding combination (HEAD+PHRASE) with α = 0.7 for the tree kernels, denoted WED. 4.3 Domain Adaptation Experiments In this section, we examine the semantic representation for DA of RE in the tree kernelbased method. 
In particular, we take the systems using the PET trees, word clusters and LSA in Plank and Moschitti (2013) as the baselines and augment them with the embeddings WED = HEAD+PHRASE. We report the performance of these augmented systems in Table 2 for the two scenarios: (i) in-domain: both training and testing are performed on the source domain via 5-fold cross validation and (ii) out-of-domain: models are trained on the source domain but evaluated on the three target domains. To summarize, we find: First, word embeddings seem to subsume word clusters in the tree kernel-based method (comparing rows 2 and 4, and except domain cts) while word embeddings and LSA actually encode different information (comparing rows 2 and 6 for the out-of-domain experiments) and their combination would be helpful for DA of RE. Second, regarding composite kernels, given word embeddings, the addition of the baseline kernel (PET) is in general useful for the augmented kernels PET WC and PET LSA (comparing rows 4 and 8, rows 6 and 10) although it is less pronounced for PET LSA. Third and most importantly, for all the systems in Plank and Moschitti (2013) (the baselines) and for all the target domains, whether word clusters and LSA are utilized or not, we consistently witness the performance improvement of the baselines when combined with word embedding (comparing systems X and X+WED where X is some baseline system). The best out-of-domain performance is achieved when word embeddings are employed in conjunction with the composite kernels (PET+PET WC+PET LSA for the target domains bc and wl, and PET+PET WC for the target domain cts). To be more concrete, the best system with word embeddings (row 12 in Table 2) significantly outperforms the best system in Plank and Moschitti (2013) with p < 0.05, an improvement of 3.7%, 1.1% and 2.7% on the target domains bc, cts and wl respectively, demonstrating the benefit of word embeddings for DA of RE in the tree kernel-based method. 4.4 Tree Kernel-based vs Feature-based DA of RE This section aims to compare the tree kernel-based method in Plank and Moschitti (2013) and the feature-based method in Nguyen and Grishman (2014) for DA of RE on the same settings (i.e, same dataset partition, the same pre-processing 640 nw+bn (in-dom.) bc cts wl System: P: R: F1: P: R: F1: P: R: F1: P: R: F1: Tree kernel-based: PET+PET WC+HEAD+PHRASE 56.3 50.3 53.1 57.5 46.6 51.5 55.6 39.8 46.4 41.5 37.9 39.6 Feature-based: FET+WC+HEAD 44.5 51.0 47.5 46.5 49.3 47.8 44.5 40.0 42.1 35.4 39.5 37.3 FET+WC+TREE 44.4 50.2 47.1 46.4 48.7 47.6 43.7 40.3 41.9 32.7 36.7 34.6 FET+WC+HEAD+PHRASE 44.9 51.6 48.0 46.0 49.1 47.5 45.2 41.5 43.3 34.7 39.2 36.8 FET+WC+HEAD+TREE 45.1 51.0 47.8 46.9 48.4 47.6 43.8 39.5 41.5 34.7 38.8 36.6 Table 3: Tree kernel-based in Plank and Moschitti (2013) vs feature-based in Nguyen and Grishman (2014). All the comparisons between the tree kernel-based method and the feature-based method in this table are significant with p < 0.05. procedure, the same model of directional relation classes, the same PET trees for tree kernels and feature extraction, the same word clusters and the same word embeddings). We first evaluate the feature-based system with different combinations of embeddings (i.e, HEAD, PHRASE and TREE) on the bc development set. 
Based on the evaluation results, we then discuss the effect of the semantic representations on the feature-based system and the tree kernel-based system, and then compare the performance of the two methods when they are augmented with their best corresponding embedding combinations. System P R F1 B 51.2 49.4 50.3 B+HEAD 55.8 52.4 54.0 B+PHRASE 50.7 46.2 48.4 B+TREE 53.6 51.1 52.3 B+HEAD+PHRASE 53.2 50.1 51.6 B+HEAD+TREE 54.9 51.4 53.1 B+PHRASE+TREE 50.7 48.4 49.5 B+HEAD+PHRASE+TREE 52.7 49.4 51.0 Table 4: Performance of the feature-based method (dev). Table 4 presents the evaluation results on the bc development for the feature-based system where B is the baseline feature set consisting of FET and word clusters (WC) (Nguyen and Grishman, 2014). The Role of Semantic Representations Considering Table 4 for the feature-based method and Table 1 for the tree kernel-based method, we see that when combined with the HEAD embeddings, the compositionality embedding TREE is more effective for the feature-based method, in contrast to the tree kernel-based method, where the PHRASE embeddings are better. This can be partly explained by the fact that the tree kernel-based method emphasizes the syntactic structure of the relation mentions, while the feature-based method exploits the sequential structure more. Consequently, the syntactic semantics of TREE are more helpful for the feature-based method, whereas the sequential semantics of PHRASE are more useful for the tree kernel-based method. Performance Comparison The three best embedding combinations for the feature-based system in Table 4 are (listed by performance order): (HEAD), (HEAD+TREE) and (TREE), where (HEAD) is also the best word level method employed in Nguyen and Grishman (2014). In order to enable a fairer and clearer evaluation, when doing comparison, we use both the three best embedding combinations in the featurebased method and the best embedding combination (HEAD+PHRASE) in the tree kernel-based method. In the tree kernel-based method, we do not employ the LSA information as it comes in the form of similarity scores between pairs of words, and it is not clear how to encode this information into the feature-based method effectively. Finally, we utilize the composite kernel for its demonstrated effectiveness in Section 4.3. The most important observation from the experimental results (shown in Table 3) is that over all the target domains, the tree kernel-based system is significantly better than the feature-based systems with p < 0.05 (assuming the same resources and settings mentioned above). In fact, there are large margins between the tree kernelbased and the feature-based methods in this case (i.e, about 3.7% for bc, 3.1% for cts and 2.3% for wl), clearly confirming the hypothesis about the advantage of the tree kernel-based method over the feature-based method on DA for RE in Section 2.3. 5 Analysis This section analyzes the output of the systems to gain more insights into their operation. 641 Word Embeddings for the Tree-kernel based Method We focus on the comparison of the best model in Plank and Moschitti (2013) (row 11 in Table 2) (called P) with the same model but augmented with the embedding WED (row 12 in Tabel 2) (called P+WED). One of the most interesting insights is that the embedding WED helps to semantically generalize the phrases connecting the two target entity mentions beyond the syntactic constraints. 
For instance, model P fails to discover the relation between “Chuck Hagel” and “Vietnam” in the sentence (of the target domain bc): “Sergeant Chuck Hagel was seriously wounded twice in Vietnam.” (i.e, it returns the NONE relation as the prediction) as the substructure associated with “seriously wounded twice” does not appear with any relation in the source domain. Model P+WED, on the other hand, correctly predicts the PHYS (Located) relation between the two entities as the PHRASE embedding of “Chuck Hagel was seriously wounded twice in Vietnam.” (phrase X1) is very close to the embedding of the source domain phrase: “Stewart faces up to 30 years in prison” (phrase X2) (annotated with the PHYS relation between “Stewart” and “prison”). In fact, X2 is only the 9th closest phrase in the source domain of X1. The closest phrase of X1 in the source domain is X3: the phrase between “Iraqi soldiers” and “herself” in the sentence “The Washington Post is reporting she shot several Iraqi soldiers before she was captured and she was shot herself, too.”. However, as the syntactical structure of X1 is more similar to X2’s, and is remarkably different from X3 as well as the other closest phrases (ranked from 2nd to 8th), the new kernel function Knew would still prefer X2 due to its trade-off between syntax and semantics. Tree Kernel-based vs Feature-based From the analysis of the systems in Table 3, we find that, among others, the tree kernel-based method improves the precision significantly via the semantic and syntactic refinement it maintains. Let us consider the following phrase of the target domain bc: “troops have dislodged stubborn Iraqi soldiers” (called Y1). The feature-based systems in Table 3 incorrectly predict the ORG-AFF relation (Employment or Membership) between “Iraqi soldiers” and “troops”. This is mainly due to the high weights of the features linking the words “troop” and “soldiers” with the relation type ORG-AFF in the feature-based models, which is, in turn, originated from the high correlation of these words and the relation type in the training data of the source domain (domain bias). The tree kernelbased model in Table 3 successfully recognizes the NONE relation in this case. A closer examination shows that the phrase with the closest embedding to Y1 in the source domain is Y2: “Iraqi soldiers abandoned their posts”,5 which is annotated with the NONE relation between “Iraqi soldiers” and “their posts”. As the syntactic structure of Y2 is also very similar to Y1, it is not surprising that Y1 is closest to Y2 in the new kernel function, consequently helping the tree kernel-based method work correctly in this case. 6 Related work Word embeddings are only applied to RE recently. Socher et al. (2012b) use word embeddings as input for matrix-vector recursive neural networks in relation classification while Zeng et al. (2014), and Nguyen and Grishman (2015) employ word embeddings in the framework of convolutional neural networks for relation classification and extraction, respectively. Sterckx et al. (2014) utilize word embeddings to reduce noise of training data in distant supervision. Kuksa et al. (2010) present a string kernel for bio-relation extraction with word embeddings, and Yu et al. (2014; 2015) study the factor-based compositional embedding models. However, none of this work examines word embeddings for tree kernels as well as domain adaptation as we do. 
Regarding DA, in the unsupervised DA setting, Huang and Yates (2010) attempt to learn multidimensional feature representations while Blitzer et al. (2006) introduce structural correspondence learning. Daum´e (2007) proposes an easy adaptation framework (EA) while Xiao and Guo (2013) present a log-bilinear language adaptation technique in the supervised DA setting. Unfortunately, all of this work assumes some prior (in the form of either labeled or unlabeled data) on the target domains for the sequential labeling tasks, in contrast to our single-system unsupervised DA setting for relation extraction. An alternative method that is also popular to DA is instance weighting (Jiang and Zhai, 2007b). However, as shown by Plank and Moschitti (2013), instance weighting is not 5The full sentence is: “After today’s air strikes, Iraqi soldiers abandoned their posts and surrendered to Kurdish fighters.”. 642 very useful for DA of RE. 7 Conclusion In order to improve the generalization of relation extractors, we propose to augment the semantic syntactic tree kernels with the semantic representation of relation mentions, generated from the word embeddings of the context words. The method demonstrates strong promise for the DA of RE, i.e, it significantly improves the best system of Plank and Moschitti (2013) (up to 7% relative improvement). Moreover, we perform a compatible comparison between the tree kernel-based method and the feature-based method on the same settings and resources, which suggests that the tree kernel-based method (Plank and Moschitti, 2013) is better than the feature-based method (Nguyen and Grishman, 2014) for DA of RE. An error analysis is conducted to get a deeper comprehension of the systems. Our future plan is to investigate other syntactic and semantic structures (such as dependency trees, abstract meaning representation etc) for DA of RE, as well as continue the comparison between the kernel-based method and the featurebased method when they are allowed to exploit more resources. References Marco Baroni and Roberto Zamparelli. 2010. Nouns are vectors, adjectives are matrices: Representing adjective-noun constructions in semantic space. In EMNLP. Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky wide web: a collection of very large linguistically processed web-crawled corpora. In Language Resources and Evaluation, pages 209–226. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. In Journal of Machine Learning Research 3, pages 1137–1155. William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. In EMNLP. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In EMNLP. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In ACL. Stephan Bloehdorn and Alessandro Moschitti. 2007. Exploiting Structure and Semantics for Expressive Text Kernels. In CIKM. Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vincent J. Della Pietra, and Jenifer C. Lai. 1992. Class-based n-gram models of natural language. In Computational Linguistics, pages 467–479. Razvan C. Bunescu and Raymond J. Mooney. 2005. A shortest path dependency kernel for relation extraction. In EMNLP. Razvan C. Bunescu. 2007. Learning to extract relations from the web using minimal supervision. In ACL. 
Yee Seng Chan and Dan Roth. 2010. Exploiting background knowledge for relation extraction. In COLING. Hal Daume. 2007. Frustratingly easy domain adaptation. In ACL. Ralph Grishman, David Westbrook, and Adam Meyers. 2005. Nyu’s english ace 2005 system description. In The ACE 2005 Evaluation Workshop. Fei Huang and Alexander Yates. 2010. Exploring representation-learning approaches to domain adaptation. In The ACL Workshop on Domain Adaptation for Natural Language Processing (DANLP). Jing Jiang and ChengXiang Zhai. 2007a. A systematic exploration of the feature space for relation extraction. In NAACL-HLT. Jing Jiang and ChengXiang Zhai. 2007b. Instance weighting for domain adaptation in nlp. In ACL. Nanda Kambhatla. 2004. Combining lexical, syntactic, and semantic features with maximum entropy models for information extraction. In ACL. Pavel Kuksa, Yanjun Qi, Bing Bai, Ronan Collobert, Jason Weston, Vladimir Pavlovic, and Xia Ning. 2010. Semi-supervised abstractionaugmented string kernel for multi-level bio-relation extraction. In ECML PKDD. Andriy Mnih and Geoffrey Hinton. 2008. A scalable hierarchical distributed language model. In NIPS. Alessandro Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In ECML. Alessandro Moschitti. 2008. Kernel methods, syntax and semantics for relational text categorization. In CIKM. Thien Huu Nguyen and Ralph Grishman. 2014. Employing word representations and regularization for domain adaptation of relation extraction. In ACL. 643 Thien Huu Nguyen and Ralph Grishman. 2015. Relation extraction: Perspective from convolutional neural networks. In The NAACL Workshop on Vector Space Modeling for NLP (VSM). T. Truc-Vien Nguyen, Alessandro Moschitti, and Giuseppe Riccardi. 2009. Convolution kernels on constituent, dependency and sequential structures for relation extraction. In EMNLP. Luan Minh Nguyen, W. Ivor Tsang, A. Kian Ming Chai, and Leong Hai Chieu. 2014. Robust domain adaptation for relation extraction via clustering consistency. In ACL. Denis Paperno, The Nghia Pham, and Marco Baroni. 2014. A practical and linguistically-motivated approach to compositional distributional semantics. In ACL. Ted Pedersen. 2008. Empiricism is not a matter of faith. In Computational Linguistics 3, pages 465– 470. Slav Petrov and Ryan McDonald. 2012. Overview of the 2012 shared task on parsing the web. In The First Workshop on Syntactic Analysis of Non-Canonical Language (SANCL). Barbara Plank and Alessandro Moschitti. 2013. Embedding semantic similarity in tree kernels for domain adaptation of relation extraction. In ACL. Longhua Qian, Guodong Zhou, Fang Kong, Qiaoming Zhu, and Peide Qian. 2008. Exploiting constituent dependencies for tree kernel-based semantic relation extraction. In COLING. Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, and Christopher Potts. 2012a. Recursive deep models for semantic compositionality over a sentiment treebank. In EMNLP-CoNLL. Richard Socher, Brody Huval, Christopher D. Manning, and Andrew Y. Ng. 2012b. Semantic compositionality through recursive matrix-vector spaces. In EMNLP. Lucas Sterckx, Thomas Demeester, Johannes Deleu, and Chris Develder. 2014. Using active learning and semantic clustering for noise reduction in distant supervision. In AKBC. Ang Sun, Ralph Grishman, and Satoshi Sekine. 2011. Semi-supervised relation extraction with large-scale word clustering. In ACL. Joseph Turian, Lev-Arie Ratinov, and Yoshua Bengio. 2010. 
Word representations: A simple and general method for semi-supervised learning. In ACL. Mengqiu Wang. 2008. A re-examination of dependency path kernels for relation extraction. In IJCNLP. Markus Werning, Edouard Machery, and Gerhard Schurz. 2006. Compositionality of meaning and content: Foundational issues (linguistics & philosophy). In Linguistics & philosophy. Min Xiao and Yuhong Guo. 2013. Domain adaptation for sequence labeling tasks with a probabilistic language adaptation model. In ICML. Mo Yu, Matthew Gormley, and Mark Dredze. 2014. Factor-based compositional embedding models. In The NIPS workshop on Learning Semantics. Mo Yu, Matthew Gormley, and Mark Dredze. 2015. Combining word embeddings and feature embeddings for fine-grained relation extraction. In NAACL. Dmitry Zelenko, Chinatsu Aone, and Anthony Richardella. 2003. Kernel methods for relation extraction. In Journal of Machine Learning Research 3, pages 1083–1106. Daojian Zeng, Kang Liu, Siwei Lai, Guangyou Zhou, and Jun Zhao. 2014. Relation classification via convolutional deep neural network. In COLING. Min Zhang, Jie Zhang, Jian Su, and Guodong Zhou. 2006. A composite kernel to extract relations between entities with both flat and structured features. In COLING-ACL. Shubin Zhao and Ralph Grishman. 2005. Extracting relations with integrated information using kernel methods. In ACL. GuoDong Zhou, Jian Su, Jie Zhang, and Min Zhang. 2005. Exploring various knowledge in relation extraction. In ACL. 644
2015
62
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 645–655, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Omnia Mutantur, Nihil Interit: Connecting Past with Present by Finding Corresponding Terms across Time Yating Zhang*, Adam Jatowt*, Sourav S Bhowmick+, Katsumi Tanaka* *School of Informatics, Kyoto University +School of Computer Engineering, Nanyang Technological University {zhang,adam,tanaka}@dl.kuis.kyoto-u.ac.jp [email protected] Abstract In the current fast-paced world, people tend to possess limited knowledge about things from the past. For example, some young users may not know that Walkman played similar function as iPod does nowadays. In this paper, we approach the temporal correspondence problem in which, given an input term (e.g., iPod) and the target time (e.g. 1980s), the task is to find the counterpart of the query that existed in the target time. We propose an approach that transforms word contexts across time based on their neural network representations. We then experimentally demonstrate the effectiveness of our method on the New York Times Annotated Corpus. 1 Introduction What music device 30 years ago played similar role as iPod does nowadays? Who are today’s Beatles? Who was a counterpart of President Chirac in 1988? These and many other similar questions may be difficult to answer by average users (especially, by young ones). This is because people tend to possess less knowledge about the past than about the contemporary time. In this work we propose an effective method to solve the problem of finding counterpart terms across time. In particular, for an input pair of a term (e.g., iPod) and the target time (e.g. 1980s), we find the corresponding term that existed in the target time (walkman). We consider temporal counterparts to be terms which are semantically similar, yet, which existed in different time. Knowledge of temporal counterparts can help to alleviate the problem of terminology gap for users searching within temporal document collections such as archives. For example, given a user’s query and the target time frame, a new modified query that represents the same meaning could be suggested to improve search results. Essentially, it would mean letting searchers use the knowledge they possess on the current world to perform search within unknown collections such as ones containing documents from the distant past. Furthermore, solving temporal correspondence problem can help timeline construction, temporal summarization, reference forecasting and can have applications in education. The problem of temporal counterpart detection is however not trivial. The key difficulty comes from the change of the entire context that results in low overlap of context across time. In other words, it is difficult to find temporal counterpart terms by directly comparing context vectors across time. This fact is nicely portrayed by the Latin proverb: “omnia mutantur, nihil interit” (in English: “everything changes, nothing perishes”) which indicates that there are no completely static things, yet, many things and concepts are still similar across time. Another challenge is the lack of training data. If we have had enough training pairs of input terms and their temporal counterparts, then it would have become possible to represent the task as a typical machine learning problem. 
However, it is difficult to collect multiple training pairs over various domains and for arbitrary time. In view of the challenges mentioned above, we propose an approach that transforms term representations from one vector space (e.g., one derived from the present documents) to another vector space (e.g., one obtained from the past documents). Terms in both the vector spaces are represented by the distributed vector representation (Mikolov et al. 2013a; Mikolov et al. 2013c). Our method then matches the terms by comparing their relative positions in the vector spaces of different time periods alleviating the problem of low overlap between word contexts over time. It also does not require to manually prepare seed pairs of temporal counterparts. We further improve this method by automatically generating reference points that more precisely represent target terms in the form of local graphs. In result, our approach consists of finding global and local correspondence between terms over time. 645 To sum up, we make the following contributions in this paper: (1) we propose an efficient method to find temporal counterparts by transforming the representation of terms within different temporal spaces, (2) we then enhance the global correspondence method by considering also the local context of terms (local correspondence) and (3) we perform extensive experiments on the New York Times Annotated Corpus (Sandhaus, 2008), including the search from the present to the past and vice versa, which prove the effectiveness of our approach. 2 Global Correspondence Across Time Let the base time denoted as TB mean the time period associated with the input term and let the target time, TT, mean the time period in which we want to find this term’s counterparts. Typically, for users, the base time is the present time and the target time is some selected time period in the past. Note however, that we do not impose any restriction on the order and the distance of the both times. Hence, it is possible to search for present counterparts of terms that existed in the past. In our approach we first represent all the terms in the base time and in the target time within their respective semantic vector spaces, χB and χT. Then, we construct a transformation matrix to bridge the two vector spaces. Algorithm 1 summarizes the procedures needed to compute the global transformation. We will explain it in Section 2.1 and 2.2. Algorithm 1 Overview of Global Transformation Input: query q, base time TB and target time TT 1. Construct word representation model for corpus in the base time, D(TB), and in the target time, D(TT). (Section 2.1) 2. Construct transformation matrix M between D(TB) and D(TT) by first collecting CFTs as training pairs and then learning M using Eq. 1. (Section 2.2) 3. Rank the words in target time by their correspondence scores (Eq. 2) Output: ranked list of temporal counterparts 2.1 Vector space word representations Distributed representation of words by neural network was first proposed by Rumelhart et al. (1986). More recently, Mikolov et al. (2013a, 2013c) introduced the Skip-gram model which utilizes a simplified neural network architecture for learning vector representations of words from unstructured text data. We apply this model due to its advantages: (1) it can capture precise semantic word relationships; (2) due to the simplified neural network architecture, the model can easily scale to millions of words. 
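As an illustration of step 1 of Algorithm 1, the sketch below trains one Skip-gram model per time period. The paper does not prescribe a particular toolkit, so gensim's Word2Vec is used here purely as an example (parameter names follow gensim 4.x); docs_base, docs_target and the vector dimensionality are placeholders, not values taken from the paper.

```python
# Sketch: one Skip-gram model per corpus, D(T_B) and D(T_T).
from gensim.models import Word2Vec

def train_period_model(tokenized_docs, dim=200, min_count=5):
    # tokenized_docs: list of token lists extracted from the documents of one time period
    return Word2Vec(sentences=tokenized_docs,
                    vector_size=dim,   # p (base time) or q (target time)
                    sg=1,              # use the Skip-gram architecture
                    window=5, min_count=min_count, workers=4)

# docs_base / docs_target stand for the tokenized documents of D(T_B) and D(T_T):
# model_base = train_period_model(docs_base)      # base time, e.g. 2003-2007
# model_target = train_period_model(docs_target)  # target time, e.g. 1987-1991
```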
After applying the Skip-gram model, the documents in the base time, D(TB), are converted to an m×p matrix, where m is the vocabulary size of the base time and p is the dimensionality of the feature vectors. Similarly, the documents in the target time, D(TT), are represented as an n×q matrix, where n is the target-time vocabulary size and q is its dimensionality (as shown in Fig. 1). Figure 1: Word vector representations for the base and the target time. 2.2 Transformation across vector spaces Our goal is to compare words in the base time and the target time in order to find temporal counterparts. However, it is impossible to directly compare words in two different semantic vector spaces, as the features in the two spaces have no direct correspondence with each other (as can be seen in Fig. 1). To solve this problem, we propose to train a transformation matrix in order to build the connection between the different vector spaces. The key idea is that the relative positions of words in each vector space should remain more or less stable. In other words, a temporal counterpart term should have a relative position in its own vector space similar to the position of the queried term in the base time space. Fig. 2 conceptually portrays this idea as the correspondence between the context of Walkman and the context of iPod (only two dimensions are shown for simplicity). Figure 2: Conceptual view of the across-time transformation by matching similar relative geometric positions in each space. Our task is then to train the transformation matrix to automatically “rotate” the base vector space into the target vector space. Suppose we have K pairs of temporal counterparts {(q̂1, w1), …, (q̂K, wK)}, where q̂i is a base time term and wi is its counterpart in the target time. Then the transformation matrix M can be computed by minimizing the differences between M·q̂i and wi as given in Eq. 1. The latter part of Eq. 1 is added as regularization to overcome the problem of overfitting. Intuitively, matrix M is obtained by making sure that the sum of Euclidean 2-norms between transformed query vectors and their counterparts is minimal on the K seed query-counterpart pairs. Eq. 1 is solved as a regularized least squares problem (γ equals 0.02).
$M = \arg\min_{M} \sum_{i=1}^{K} \left\| M\hat{q}_i - w_i \right\|_2^2 + \gamma \left\| M \right\|_2^2 \qquad (1)$
However, as mentioned before, the other challenge is that the training pairs are difficult to obtain. It is non-trivial to prepare large enough training data that would also cover various domains and any possible combination of the base and target time periods. We apply here a simple trick that performs reasonably well. We select terms that (a) have the same syntactic forms in the base and the target time periods and (b) are frequent in both time periods. Such Common Frequent Terms (CFTs) are then used as the training data. Essentially, we assume here that very frequent terms (e.g., man, women, water, dog, see, three) change their meanings only to a small extent. The reasoning is that the more frequently a word is used, the harder it is to change its dominant meaning (or the longer it takes the meaning to shift), as the word is commonly used by many people.
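A minimal sketch of this step may help: collect common frequent terms as training pairs, solve Eq. 1 in closed form as a ridge regression, and rank target-time terms by the correspondence score used in step 3 of Algorithm 1 (Eq. 2 below). It is not the authors' implementation: the frequency dictionaries, the embedding lookups and the number of CFTs (top_n) are placeholders, and only γ = 0.02 is taken from the text above.

```python
# Sketch of the global transformation: CFT selection, Eq. 1 (ridge solution), Eq. 2 (ranking).
import numpy as np

def select_cfts(freq_base, freq_target, top_n=1000):
    # Common Frequent Terms: same surface form in both periods, ranked by frequency
    common = set(freq_base) & set(freq_target)
    return sorted(common, key=lambda w: min(freq_base[w], freq_target[w]),
                  reverse=True)[:top_n]

def fit_transformation(emb_base, emb_target, cfts, gamma=0.02):
    Q = np.vstack([emb_base[w] for w in cfts])    # K x p (base-time vectors of the CFTs)
    W = np.vstack([emb_target[w] for w in cfts])  # K x q (target-time vectors of the CFTs)
    p = Q.shape[1]
    # closed-form ridge solution of Eq. 1: M^T = (Q^T Q + gamma*I)^-1 Q^T W
    M = np.linalg.solve(Q.T @ Q + gamma * np.eye(p), Q.T @ W).T   # shape q x p
    return M

def rank_counterparts(query_vec, M, target_vocab, emb_target, top_k=10):
    # Correspondence(q, w) = cos(M q, w), i.e. Eq. 2
    mapped = M @ query_vec
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    scored = [(w, cos(mapped, emb_target[w])) for w in target_vocab]
    return sorted(scored, key=lambda x: x[1], reverse=True)[:top_k]
```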
The phenomenon that words used more often in everyday language had evolved more slowly has been observed in several languages including English, Spanish, Russian and Greek (Pargel et al., 2007; Lieberman et al. 2007). Then, using the common frequent terms as the training pairs, we solve Eq. 1 as the least squares problem. Note that the number of CFTs is heuristically decided. In Sec. 5 we discuss transformation performance with regards to different numbers of CFTs. After obtaining matrix Μ, we can then transform the base time term, q, first by multiplying its vector representation with the transformation matrix Μ, and then by calculating the cosine similarity between such transformed vector and the vectors of all the terms in the target time. We call the result of this similarity comparison the correspondence score between the input term q in the base time and a given term w in the target time (see Eq. 2). A term which has the highest correspondence score could be then considered as temporal counterpart of q.     w q M w q ence Correspond , cos ,   (2) 3 Local Correspondence across Time The method described above computes “global similarity” between terms across time. In result, the discovered counterparts can be similar to the query term for variety of reasons, some of which may not always lead to the best results. For instance, the global transformation finds VCR as the temporal counterpart of iPod in 1980s simply because both of them can have recording and playback functions. Macintosh is another term judged to be strongly corresponding to iPod since both are produced by Apple. Clearly, although VCR and Macintosh are somewhat similar to iPod, they are far from being its counterparts. The global transformation, as presented in the previous section, may thus fail to find correct counterparts due to neglecting fundamental relations between a query term and its context. Inspired by these observations, we propose another method for leveraging the informative context terms of an input query term called reference points. They are used to help mapping the query to its correct temporal counterpart by considering the relation between the query and the reference points. We call this kind of similarity matching as local correspondence in contrast to global correspondence described in Sec. 2. In the following sub-sections, we first introduce the desired characteristics of the reference points and we then propose three computation methods for selecting them. Finally, we describe how to find temporal counterparts using the selected reference points. Algorithm 2 shows the process of computing the local transformation. Algorithm 2 Overview of Local Transformation Input: query q, base time TB and target time TT 1. Construct the local graph of q by detecting the reference points in the context of q. (Section 3.1) 2. Compute similarity of the local graph of q with all the local graphs of candidate temporal counterparts in the target time. (Section 3.2) 3. Rank the candidate temporal counterparts in the target time by graph similarity score (Eq. 4). Output: ranked list of temporal counterparts 647 3.1 Reference points detection Reference points are terms in the query’s context which help to build connection between the query and its temporal counterparts. Reference points should have at least some of the following characteristics: (a) have high relation with the query (b) be sufficiently general and (c) be independent from each other. 
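Before turning to how reference points are chosen, the global correspondence scoring of Eq. 2 can be sketched as follows; the sketch assumes gensim-style models for the two periods and the transformation matrix M fitted above, and all names are ours.

```python
# Minimal sketch of Eq. 2: transform the base-time query with M and rank
# target-time terms by cosine similarity with the transformed vector.
import numpy as np

def correspondence_ranking(query, model_base, model_target, M, topn=1000):
    q = model_base.wv[query]                   # vector of q in the base space
    q_trans = M @ q                            # M * q, now comparable in the target space
    vocab = model_target.wv.index_to_key
    vecs = model_target.wv.vectors             # (|V_target|, d_target)
    scores = vecs @ q_trans / (np.linalg.norm(vecs, axis=1)
                               * np.linalg.norm(q_trans) + 1e-12)
    order = np.argsort(-scores)[:topn]
    return [(vocab[i], float(scores[i])) for i in order]

# e.g. correspondence_ranking("ipod", model_base, model_target, M)[:10]
```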
Note that it does not mean that the selected reference point should have exactly same surface form across time. Let us consider the previous example query iPod and 1980s as the target time. The term music could be a candidate reference point for this query. Its temporal counterpart has exactly the same syntax form in the target time (music). However, mp3 could be another reference point. Even though mp3 did not exist in 1980s, it can still be referred to storage devices at the target time such as cassette or disk helping thus to find the correct counterparts of iPod, that is, walkman and CD player. Since different reference points will lead to different answers, we propose three methods for selecting the reference points. Each one considers the previously mentioned characteristics of reference points to different extent. Note that, if necessary, the choice of the references points can be left to users. Term co-occurrence. The first approach satisfies the reference points’ characteristics of being related to the query. To select reference points using this approach we rank context terms by multiplying two factors: tf(c) and relatedness(q,c), where tf(c) is the frequency of a context term c, while relatedness(q,c) is the relation strength of q and c measured by the χ2 test. The test is conducted based on the hypothesis that P(c|q)=P(c|q̄ ), according to which the term c has the same probability of occurring in documents containing query q and in the documents not containing q. We then use the inverse of the p-value obtained from the test as relatedness(q,c). Lexico-syntactic patterns. As the second approach we propose using hypernyms of terms. This corresponds to the characteristic of reference points to be general words. General terms are preferred rather than specific or detailed ones since the former are more probable to be associated with correct temporal counterparts1. This is because detailed or specific terms are less likely to have corresponding terms in the target time. To detect 1 We have experimented with hyponyms and coordinate terms used as reference points and found the results are worse than when using hypernyms. hypernyms on the fly, we adopt the method proposed by Ohshima et al. (2010) that uses bi-directional lexico-syntactic patterns due to its high speed and the lack of requirements for using external ontologies. The latter is important since, to the best of our knowledge, there are no ready ontology resources for arbitrary periods in the past (e.g., there seems to be no Wordnet for the past). Semantic clustering. The last method chooses reference points from clusters of context terms. The purpose of applying clustering is to avoid choosing semantically similar reference points. Clustering helps to select typical terms from different sematic clusters to provide diverse informative context. For grouping the context terms we utilize the bisecting k-means algorithm. It is superior over kmeans and the agglomerative approach (Steinbach et al., 2000) in terms of accuracy. The procedure of bisecting k-means is to, first, select a cluster to split and then to utilize the basic k-means to form two sub-clusters. These two steps are repeated until the desired number of clusters is obtained. The distance between any two terms w1, w2 is the inverse of cosine similarity between their vector representations. ) , cos( 1 ) , ( 2 1 2 1 w w w w Dist   (3) 3.2 Local graph matching Formulation. 
The local graph of query q is a star shaped graph, denoted as SqFB, in which q is the internal node, and the set of reference points, 𝐹B = {f1, f2,…, fu}, are leaf nodes where u is the number of reference points. Our objective is to find a local graph SwFT in the target vector space that is most similar to SqFB in the base vector space. w denotes here the temporal counterpart of q and FT is the set of terms in the target vector space that corresponds to FB. Algorithm. Step (1): to compare the similarity between two graphs in different vector spaces, every node (i.e. term) in SqFB is required to be transformed first to allow for comparison under the same vector space. So the transformed vector representation of q becomes Μ∙q and FB is transformed to {Μ∙f1, Μ∙f2 …, Μ∙fu} (recall that Μ is the transformation matrix). Step (2): for each node in SqFB, we then choose the top k candidate terms with the highest correspondence score in the target space. Note that we would need to perform k∙ku 648 combinations of nodes (or candidate local graphs) in total, to find the best graph with the highest graph similarity. The computation time becomes then an issue as the number of comparisons grows in polynomial way with the increase in the number of candidate terms. However, we manage to reduce the number of combinations to k∙k∙u by assuming the reference points be independent of each other. Then, for every selected candidate temporal counterpart, we only choose the set of corresponding terms FT which maximizes the current graph similarity. By default we set k equal to 1000. The process is shown in Algorithm 3. Algorithm 3 Local Graph Matching Input: local graph of q, SqFB W = top k corresponding terms of q (by Eq. 2) FF = {top k corresponding terms of each f in reference points FB={ f0, f1, …, fu}} (by Eq. 2) for w = W[1:k] do: sum_cos = 0 # total graph similarity score for F = FF[1:u] do: max_cos = 0 # current maximum similarity for c = F[1:k] do: find c which maximizes current graph similarity end for sum_cos += max_cos end for end for sort W by sum_cos of each w in W. Output: sorted W as ranked list of temporal counterparts Graph similarity computation. To compute the similarity of two star shaped graphs, we take both the semantic and relational similarities into consideration. Fig. 3 conceptually portrays this idea. Since all the computation is done under the same vector space (after transformation), the semantic meaning is represented by the absolute position of the term, that is, by its vector representation in the vector space. On the other hand, the relation is described by the difference of two term vectors. Finally, the graph similarity function g(SqFB,SwFT) is defined as the combination of the relational similarity function, h(SqFB,SwFT), and semantic similarity function, z(SqFB,SwFT), as follows: )) , cos( ) , cos( max( ) ( max ) 1( ) , ( ) , ( ) 1( ) , ( , ,                   T T B B T T B B T B T B T B T B F f F f T B F f F f f w f q F w F q F w F q F w F q w q f f R R S S z S S h S S g     (4) where RqfB is the difference of vectors between q and fB in FB represented as [q-fB]. RwfT is the difference of vectors between w and fT in FT, [w-fT], where fT is selected from k candidates corresponding terms of fB. fT maximizes the cosine similarity between [q- fB] and [w- fT]. λ is set to 0.5 by default. Intuitively, SqFB is a graph composed of query and its reference points, while SwFT is a graph containing candidate word w and its reference points. The first maximum in Eq. 
4 finds for each reference point in the base time, fB, the top-k candidate terms corresponding to fB in the target time. Next, it finds within k such fT that similarity between [q- fB] and [w- fT] is maximum (relational similarity). The second maximum in Eq. 4 is same as the first one with the exception that it computes the semantic similarity instead of the relational similarity. The two summations in Eq. 4 aggregate both the similarity scores over all the reference points. Figure 3: The concept of computing semantic and relational similarity in matching local graphs. 4 Experimental Setup 4.1 Training sets For the experiments we use the New York Times Annotated Corpus (Sandhaus, 2008). This dataset contains over 1.8 million newspaper articles published between 1987 and 2007. We first divide it into four parts according to article publication time: [1987-1991], [1992-1996], [1997-2001] and [2002-2007]. Each time period contains then around half a million articles. We next train the model of distributed vector representation separately for each time period. The vocabulary size of the entire corpus is 360k, while the vocabulary size of each time period is around 300k. In the experiments, we first focus on the pair of time periods separated by the longest time gap, that is, [2002, 2007] as the base time and [1987, 1991] as the target time. We also repeat the experiment using more recent target time: [1992, 1996]. base time (e.g. 2003-2007) ipod mp3 music apple target time (e.g. 1987-1991) music cassette walkman sony semantic similarity relational similarity 649 Table 1: Example results where q is the input term and tc is the matching temporal counterpart. The numbers are the ranks of the correct temporal counterpart in the results ranked by each method. Since we output only the top 1000 results, ranks lower than 1000 are represented as 1000+. 4.2 Test sets As far as we know there is no standard test bench for temporal correspondence finding. We then had to manually create test sets containing queries in the base time and their correct temporal counterparts in the target time. In this process we used external resources including the Wikipedia, a Web search engine and several historical textbooks. The test terms cover three types of entities: persons, locations and objects. The examples of the test queries and their temporal counterparts for [1987, 1991] are shown in Table 1 where q denotes the input term and tc is the correct counterpart. Note that the expected answer is not required to be single neither exhaustive. For example, there can be many answers for the same query term, such as letter, mail, fax, all being commonly used counterparts in 1980s for email. Furthermore, as we do not care for recall in this research, we do not require all the correct counterpart terms to be found. In total, there are 95 pairs of terms (query and its counterpart) resulting from 54 input query terms for the task of mapping [2002, 2007] with [1987, 1991], and 50 term pairs created from 25 input query terms for matching [2002, 2007] and [1992, 1996]. 4.3 Evaluation measures and baselines We use the Mean Reciprocal Rank (MRR) as a main metric to evaluate the ranked search results for each method. MRR is expressed as the mean of the inverse ranks for each test where a correct result appears. It is calculated as follows:    N i i rank N MRR 1 1 1 (5) where ranki is the rank of a correct counterpart at the i-th test. N is the number of query-answer pairs. MRR’s values range between [0,1]. 
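For clarity, a minimal sketch of this metric is given below; here a correct counterpart that does not appear in the returned list contributes zero, which is one possible convention for the 1000+ cases reported in Table 1.

```python
# Minimal sketch of Eq. 5: mean reciprocal rank over the query-answer test pairs.
def mean_reciprocal_rank(ranks):
    """ranks: 1-based rank of the correct counterpart for each test pair,
    or None when it is not found in the returned list."""
    reciprocal = [1.0 / r if r is not None else 0.0 for r in ranks]
    return sum(reciprocal) / len(reciprocal)

# e.g. mean_reciprocal_rank([1, 3, None, 24]) == (1 + 1/3 + 0 + 1/24) / 4
```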
The higher the value, the more correct the method is. Besides MRR, we also report precision @1, @5, @10 and @20. They are equal to the rates of tests in which the correct counterpart term tc was found in the top 1, 5, 10 and 20 results, respectively. Baselines. We prepare three baselines: (1) Bag of words approach (BOW) without transformation: this method directly compares the context of the query in the base time with the context of the candidate term in the target time. We use it to examine whether the distributed vector representation and transformation are necessary. (2) Latent Semantic Indexing (LSI) without transformation (LSI-Com): we first merge the documents in the base time and the documents in the target time. Then, we train LSI (Deerwester, 1988) on such combined collection to represent each term by the same distribution of detected topics. We next search for the terms that exist in the target period and that are also semantically similar to the queried terms by comparing their vector q [2002,2007] tc [1987,1991] BOW (baseline) LSI-Com (baseline) LSI-Tran (baseline) GT (proposed) LT-Cooc (proposed) LT-Lex (proposed) LT-Clust (proposed) Putin Yeltsin 1000+ 252 353 24 1 1 1 Chirac Mitterrand 1000+ 8 1 7 19 1 3 iPod Walkman 1000+ 20 131 3 13 1 16 Merkel Kohl 1000+ 1000+ 537 142 76 7 102 Facebook Usenet 1000+ 1000+ 1000+ 1 1 1 1 Linux Unix 1000+ 11 1 20 1 1 1 email letter 1000+ 1000+ 464 1 35 1 17 email mail 1000+ 1 9 7 2 6 11 email fax 1000+ 1000+ 10 3 1 4 2 Pixar Tristar 1000+ 549 1 1 1 1 1 Pixar Disney 1000+ 4 4 3 2 2 4 Serbia Yugoslavia 1000+ 15 1000+ 1 1 1 1 mp3 compact disk 1000+ 56 44 58 17 19 22 Rogge Samaranch 1000+ 4 22 42 82 34 44 Berlin Bonn 1000+ 43 265 62 40 48 56 Czech Czechoslovakia 1000+ 1 3 4 3 7 4 USB floppy disk 1000+ 209 1000+ 20 1 1 4 spam junk mail 1000+ 1000+ 37 5 61 1 1 Kosovo Yugoslavia 1000+ 59 1000+ 14 10 6 11 650 representations. The purpose of using LSI-Com is to check the need for the transformation over time. (3) Latent Semantic Indexing (LSI) with transformation (LSI-Tran): we train two LSI models separately on the documents in the base time and the documents in the target time. Then we train the transformation matrix in the same way as we did for our proposed methods. Lastly, for a given input query, we compare its transformed vector representation with terms in the target time. LSI-Tran is used to investigate if LSI can be an alternative for the vector representation under our transformation scenario. Proposed Methods. All our methods use the neural network based term representation. The first one is the method without considering the local context graph called GT (see Sec. 2). By testing it we want to investigate the necessity of transforming the context of the query in the target time. We also test the three variants of the proposed approach that applies the local graph (explained in Sec. 3). The first one, LT-Lex, constructs the local graph by using the hypernyms of terms. LTCooc applies term co-occurrence to select the reference points. Finally, LT-Clust clusters the context terms by their semantic meanings and selects the most common term from each cluster. 4.4 Parameter settings We set the parameters as follows: (1) num_of_dim: we experimentally set the number of dimensions of the Skip-gram model and the number of topics of LSI to be 200. (2) num_of_CFTs: we utilize the top 5% (18k words) of Common Frequent Terms to train the transformation matrix. We have tried other numbers but we found 5% to perform best (see Fig. 4). 
(3) u: the number of reference points (same as the number of semantic clusters) is set to be 5. According to the results, we found that increasing the number of reference points does not always improve the results. The performance depends rather on whether the reference points are general enough, as too detailed ones hurt the results. 5 Experimental Results First, we look at the results of finding temporal counterparts in [1987, 1991]. The average scores for each method are shown in Table 2. Table 1 shows detailed results for few example queries. The main finding is that all our methods outperform the baselines when measured by MRR and by the precisions at different ranks. In the following subsections we discuss the results in detail. 5.1 Context change over time The first observation is that the task is quite difficult as evidenced by extremely poor performance of the bag of words approach (BOW). The correct answers in BOW approach are usually found at ranks 10k-30k (recall that the vocabulary size is 360k). This suggests little overlap in the contexts of query and counterpart terms. The fact that all our methods outperform the baselines suggests that the across-time transformation is helpful. 5.2 Using local context graph We can observe from Table 2 that, in general, using the local context graph improves the results. The best performing approach, LT-Lex, improves GT method, which uses only global similarity matching, by 24% when measured using MRR. It increases the precision at certain levels of top ranks, especially, at the top 1, where it boosts the performance by 44%. LT-Lex uses the hypernyms of query as reference points in the local graph. This suggests that using generalized context terms as reference points is most helpful for finding correct temporal counterparts. On the other hand, LT-Cooc and LT-Clust usually fail to improve GT. It may be because the term co-occurrence and semantic clustering approaches detect less general terms that tend to capture too detailed information which is then poorly related to the temporal counterpart. For example, LT-Cooc detects {music, Apple, computer, digital, iTunes} as the reference points of the query iPod. While music is shared by iPod’s counterpart (walkman) and Apple can be considered analogical to Sony, other terms (i.e., computer, digital, iTunes) are rather too specific and unique for iPod. 5.3 Using neural network model When comparing the results of LSI-Com and LSI-Tran in Table 2, we can see that using the transformation does not help LSI to enhance the performance but, on the contrary, it makes the results worse. Method MRR P@1 P@5 P@10 P@20 BOW 4.1E-5 0 0 0 0 LSI-Com 0.206 15.8 27.3 29.5 38.6 LSI-Tran 0.112 7.9 13.6 21.6 22.7 GT 0.298 16.8 44.2 56.8 73.7 LT-Cooc 0.283 18.8 35.3 50.6 62.4 LT-Lex 0.369 24.2 49.5 63.2 71.6 LT-Clust 0.285 14.7 42.1 55.1 65.2 Table 2: Results of searching from present to past (present: 2002-2007; past: 1987-1991). 651 Yet, as discussed above, applying the transformation is good idea in the case of the Neural Network Model. We believe the reason for this is because it is difficult to perform the global transformation between topics underling the dimensions of LSI, in contrast to transforming “semantic dimensions” of Neural Network Model. 5.4 Effect of the number of CFTs Fig. 4 shows MRR results for different numbers of Common Frequent Terms (CFTs) when applying GT method. Note that the level of 0.10% (the first point) corresponds to using 658 stop words as seed pairs. 
As mentioned before, 5% of CFTs allows to obtain the best results. Figure 4: Results of MRR for GT method depending on number of used CFTs. 5.5 Searching from past to present We next analyze the case of searching from the past to the present. This scenario may apply to the case of a user (perhaps, an older person) who possesses knowledge about the past term but does not know its modern counterparts. Table 3 shows the performance. We can see that, again, all our approaches outperform all the baselines using all the measures. LT-Lex is the best performing approach, when measured by MRR and P@1 and P@20. LT-Cooc this time returns the best results at P@5 and P@10. Method MRR P@1 P@5 P@10 P@20 BOW 3.4E-5 0 0 0 0 LSI-Com 0.181 13.2 19.7 28.9 35.5 LSI-Tran 0.109 5.3 17.1 21.1 23.7 GT 0.226 15.2 27.3 33.3 45.5 LT-Cooc 0.231 14.7 30.7 36 46.7 LT-Lex 0.235 16.7 28.8 31.8 48.5 LT-Clust 0.228 13.6 28.8 31.8 47 Table 3: Average scores of searching from past to present (present: 2002-2007; past: 1987-1991). The objective of testing the search from the past to present is to prove our methods work in both directions. As for now, we can only conclude the performance is asymmetrical. Yet, we might speculate that, along with the increase in distance, searching from past to present could be harder due to present world becoming relatively more diverse when seen from the distant past. 5.6 Results using different time period Finally, we perform additional experiment using another target time period [1992, 1996] to verify whether our approach is still superior on different target time. For the experiment we use the best performing baseline listed in Table 2, LSI-Com, and the best proposed approach, LT-Lex, as well as GT. The results are shown in Tables 4 and 5. LT-Lex outperforms the other baselines in both the search from the present to the past (Table 4) and from the past to the present (Table 5). Note that since the query-answers pairs for [1992, 1996] are different than ones for [1987, 1991], their results cannot be directly compared. Method MRR P@1 P@5 P@10 P@20 LSI-Com 0.115 10.6 14.9 21.3 23.4 GT 0.132 8.5 27.7 40.4 53.2 LT-Lex 0.169 10.6 34.1 48.9 55.3 Table 4: Results of searching from present to past (present: 2002-2007; past: 1992-1996). Method MRR P@1 P@5 P@10 P@20 LSI-Com 0.148 11.6 18.6 23.3 30.2 GT 0.184 11.6 23.3 30.2 44.2 LT-Lex 0.212 14 28 32.6 44.2 Table 5: Results of searching from past to present (present: 2002-2007; past: 1992-1996). 5.7 Confidence of Results The approach described in this paper will always try to output some matching terms to a query in the target time period. However in some cases, no term corresponding to the one in the base time existed in the target time (e.g. when the semantic concept behind the term was not yet born or, on the contrary, it has already felt out of use). For example, junk mail may not have any equivalent in texts created around 1800s. A simple solution to this problem would be to use Eqs. 2 and 4 to serve as measures of confidence behind each result in order to decide whether the found counterparts should or not be shown to users. Note however that the scores returned by Eqs. 2 and 4 need to be first normalized according to the distance between the target time and the base time periods. 
6 Related Work

Temporal changes in word meaning have been an important topic of study within historical linguistics (Aitchison, 2001; Campbell, 2004; Labov, 2010; Hughes, 1988). Some researchers have employed computational methods for analyzing changes in word senses over time (Mihalcea and Nastase, 2012; Kim et al., 2014; Jatowt and Duh, 2014; Kulkarni et al., 2015). For example, Mihalcea and Nastase (2012) classified words into one of three past epochs based on their contexts. Kim et al. (2014) and Kulkarni et al. (2015) computed the degree of meaning change by applying neural networks for word representation. Jatowt and Duh (2014) also used sentiment analysis and word-pair comparison for estimating meaning change. Our objective is different, as we search for corresponding terms across time, and, in our case, temporal counterparts can have different syntactic forms.

Several works have considered computing term similarity across time (Kalurachchi et al., 2010; Kanhabua et al., 2010; Tahmasebi et al., 2012; Berberich et al., 2009). Kalurachchi et al. (2010) proposed to discover semantically identical but temporally altering concepts by applying association rule mining, assuming that concepts referred to by similar events (verbs) are semantically related. Kanhabua et al. (2010) discovered the change of terms through the comparison of temporal Wikipedia snapshots. Berberich et al. (2009) approached the problem by introducing an HMM and measuring the across-time semantic similarity between two terms by comparing their contexts captured by co-occurrence measures. Tahmasebi et al. (2012) improved that approach by first detecting the periods of name change and then analyzing the contexts during those periods to find the temporal co-references of different names. There are important differences between those works and ours. First, the previous works mainly focused on detecting changes of the names of the same, single entity over time; for example, the objective was to look for the previous name of Pope Benedict (i.e., Joseph Ratzinger) or the previous name of St. Petersburg (i.e., Leningrad). Second, these approaches relied on co-occurrence statistics, following the intuition that if two terms share similar contexts, then they are semantically similar. In our work, we do not require the contexts to be literally the same, only to have the same meaning.

Transfer learning (Pan et al., 2010) is related to our work to some extent. It has mainly been used in tasks such as POS tagging (Blitzer et al., 2006), text classification (Blitzer et al., 2007; Ling et al., 2008; Wang et al., 2011; Xue et al., 2008), learning to rank (Cai et al., 2011; Gao et al., 2010; Wang et al., 2009) and content-based retrieval (Kato et al., 2012). The temporal correspondence problem can also be understood as transfer learning, since it is a search process that uses samples in the base time to infer corresponding instances in the target time. However, the difference is that we do not only consider the structural correspondence but also utilize the semantic similarity across time. The idea of distance-preserving projections is also used in automatic translation (Mikolov et al., 2013b). Our research problem is, however, more difficult and still unexplored.
In the traditional language translation, languages usually share same concepts, while in the across-time translation concepts evolve and thus may be similar but not always same. Furthermore, the lack of training data is another key problem. 7 Conclusions and Future Work This work approaches the problem of finding temporal counterparts as a way to build a “bridge” across different times. Knowing corresponding terms across time can have direct usage in supporting search within longitudinal document collections or be helpful for constructing evolution timelines. We first discuss the key challenge of the temporal counterpart detection – the fact that contexts of terms change, too. We then propose the global correspondence method using transformation between two vector spaces. Based on this, we then introduce more refined approach of computing the local correspondence. Through experiments we demonstrate that the local correspondence using hypernyms outperforms both the baselines and the global correspondence approach. In the future, we plan to test our approaches over longer time spans and to design the way to automatically “explain” temporal counterparts by outputting “evidence” terms for clarifying the similarity between the counterparts. Acknowledgments We thank Makoto P. Kato for valuable comments. This work was supported in part by Grants-in-Aid for Scientific Research (Nos. 15H01718, 15K12158) from MEXT of Japan and by the JST Research Promotion Program Sakigake: “Analyzing Collective Memory and Developing Methods for Knowledge Extraction from Historical Documents”. 653 References J. Aitchison, Language Change, Progress or Decay? Cambridge University Press, 2001. K. Berberich, S. J. Bedathur, M. Sozio and G. Weikum, Bridging the Terminology Gap in Web Archive Search, In Proc. of WebDB’09, 2009. J. Blitzer, M. Dredze, and F. Pereira. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaption for Sentiment Classification. In Proc. of ACL, pages 440-447, 2007. J. Blitzer, R. McDonald, and F. Pereira. Domain adaption with structural correspondence learning. In Proceedings of the 2006 conference on empirical methods in natural language processing. Association for Computational Linguistics (EMNLP), pages 120-128, 2006. P. Cai, W. Gao, A. Zhou et al. Relevant knowledge helps in choosing right teacher: active query selection for ranking adaptation. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 115-124, 2001. L. Campbell, Historical Linguistics, 2nd edition, MIT Press, 2004. S. Deerwester et al., Improving Information Retrieval with Latent Semantic Indexing, In Proceedings of the 51st Annual Meeting of the American Society for Information Science, 25, pages 36–40, 1988. W. Gao, P. Cai, K.F. Wong et al. Learning to rank only using training data from related domain. In Proceedings of the 33rd international ACM SIGIR conference on Research and development in information retrieval, pages 162-169, 2010. G. Hughes, Words in Time: A Social History of the English Vocabulary. Basil Blackwell, 1988. A. Jatowt and K. Duh. A framework for analyzing semantic change of words across time. In Proc. of JCDL, pages 229-238, 2014. A. Kalurachchi, A. S. Varde, S. Bedathur, G. Weikum, J. Peng and A. Feldman, Incorporating Terminology Evolution for Query Translation in Text Retrieval with Association Rules, In Proceedings of the 19th ACM international Conference on Information and Knowledge Management (CIKM), pages 1789-1792, 2010. N. 
Kanhabua, K. Nørvåg, Exploiting Time-based Synonyms in Searching Document Archives, In Proceedings of the 10th annual joint conference on Digital libraries (JCDL), pages 79-88, 2010. M. P. Kato, H. Ohshima and K. Tanaka. Content-based Retrieval for Heterogeneous Domains: Domain Adaption by Relative Aggregation Points. In Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval, pages 811-820, 2012. Y. Kim, Y-I. Chiu, K. Hanaki, D. Hegde and S. Petrov. Temporal Analysis of Language through Neural Language Models. In Proceedings of the ACL 2014 Workshop on Language Technologies and Computational Social Science, pp. 61-65, 2014. V. Kulkarni, R. Al-Rfou, B. Perozzi, and S. Skiena. 2014. Statistically Significant Detection of Linguistic Change. In Proc. of WWW, pages 625-635, 2015. W. Labov. Principles of Linguistic Change (Social Factors), Wiley-Blackwell, 2010. E. Lieberman, J.-B. Michel, J. Jackson, T. Tang, M. A. Nowak. Quantifying the evolutionary dynamics of language. Nature, 449, 713-716, 2007. X. Ling, W. Dai, G. R. Xue, Q. Yang and Y. Yu. Spectral domain-transfer learning. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 488496, 2008. R. Mihalcea, and V. Nastase, “Word Epoch Disambiguation: Finding How Words Change Over Time” in Proceedings of ACL (2) 2012, pp. 259-263, 2012. T. Mikolov, K. Chen, G. Corrado and J. Dean. Efficient Estimation of Word Representations in Vector Space. In ICLR Workshop, 2013a. T. Mikolov, QV. Le, I. Sutskever. Exploiting similarities among languages for machine translation. CoRR, abs/1309.4168, 2013b. T. Mikolov, I. Sutskever, K. Chen, G. Corrado, and J. Dean. Distributed Representation of Phrases and Their Compositionality. In Advances in Neural Information Processing Systems (NIPS), pages 31113119, 2013c. H. Ohshima and K. Tanaka. High-speed Detection of Ontological Knowledge and Bi-directional LexicoSyntactic Patterns from the Web. Journal of Software, 5(2): 195-205, 2010. S. Pan and Q. Yang. A survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10): 1345-1359, 2010. M. Pargel, Q. D. Atkinson and A. Meade. Frequency of word-use predicts rates of lexical evolution 654 throughout Indo-European history. Nature, 449, 717-720, 2007. D. E. Rumelhart, G. E. Hinton, R.J. Williams. Learning internal representations by error propagation. California Univ, San Diego La Jolla Inst. For Cognitive Science, 1985. E. Sandhaus. The New York Times Annotated Corpus Overview. The New York Times Company, Research and Development, pp. 1-22, 2008. https://catalog.ldc.upenn.edu/docs/LDC2008T19/new_york_times_annotated_corpus.pdf M. Steinbach, G. Karypis, V. Kumar. A comparison of document clustering techniques. In Proc. of KDD workshop on text mining. 2000, 400(1): 525-526. N. Tahmasebi, G. Gossen, N. Kanhabua, H. Holzmann, and T. Risse. NEER: An Unsupervised Method for Named Entity Evolution Recognition, In Proc. of Coling, pages 2553-2568, 2012. H. Wang, H. Huang, F. Nie, and C. Ding. Cross-language web page classification via dual knowledge transfer using nonnegative matrix tri-factorization. In Proceedings of the 34th international ACM SIGIR conference on Research and development in Information Retrieval, pages 933-942, 2011. B. Wang, J. Tang, W. Fan, S. Chen, Z. Yang and Y. Liu. Heterogeneous cross domain ranking in latent space. 
In Proceedings of the 18th ACM conference on Information and knowledge management (CIKM), pages 987-996, 2009. G. Xue, W. Dai, Q. Yang, and Y. Yu. Topic-bridged plsa for cross-domain text classification. In Proceedings of the 31st annual international ACM SIGIR conference on Research and development in information retrieval, pages 627-634, 2008. 655
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 656–665, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Negation and Speculation Identification in Chinese Language Bowei Zou Qiaoming Zhu Guodong Zhou* Natural Language Processing Lab, School of Computer Science and Technology Soochow University, Suzhou, 215006, China [email protected], {qmzhu, gdzhou}@suda.edu.cn Abstract Identifying negative or speculative narrative fragments from fact is crucial for natural language processing (NLP) applications. Previous studies on negation and speculation identification in Chinese language suffers much from two problems: corpus scarcity and the bottleneck in fundamental Chinese information processing. To resolve these problems, this paper constructs a Chinese corpus which consists of three sub-corpora from different resources. In order to detect the negative and speculative cues, a sequence labeling model is proposed. Moreover, a bilingual cue expansion method is proposed to increase the coverage in cue detection. In addition, this paper presents a new syntactic structure-based framework to identify the linguistic scope of a cue, instead of the traditional chunking-based framework. Experimental results justify the usefulness of our Chinese corpus and the appropriateness of our syntactic structure-based framework which obtained significant improvement over the stateof-the-art on negation and speculation identification in Chinese language. * 1 Introduction Negation and speculation are ubiquitous phenomena in natural language. While negation is a grammatical category which comprises various kinds of devices to reverse the truth value of a proposition, speculation is a grammatical category which expresses the attitude of a speaker towards a statement in terms of degree of certainty, * Corresponding author reliability, subjectivity, sources of information, and perspective (Morante and Sporleder, 2012). Current studies on negation and speculation identification mainly focus on two tasks: 1) cue detection, which aims to detect the signal of a negative or speculative expression, and 2) scope resolution, which aims to determine the linguistic coverage of a cue in sentence, in distinguishing unreliable or uncertain information from facts. For example, (E1) and (E2) include a negative cue and a speculative cue respectively, both denoted in boldface with their linguistic scopes denoted in square brackets (adopted hereinafter). In sentence (E1), the negative cue “不(not)” triggers the scope of “不会追究酒店的这次管理失 职(would not investigate the dereliction of hotel)”, within which the fragment “investigate the dereliction of hotel” is the part that is repudiated; While the speculative cue “有望(expected)” in sentence (E2) triggers the scope “后期仍有望反 弹(is still expected to rebound in the late)”, within which the fragment “the benchmark Shanghai Composite Index will rebound in the late” is the speculative part. (E1) 所有住客均表示[不会追究酒店的这次管 理失职]. (All of guests said that they [would not investigate the dereliction of hotel].) (E2) 尽管上周五沪指盘中还受创业板的下跌 所拖累,但[后期仍有望反弹]. (Although dragged down by GEM last Friday, the benchmark Shanghai Composite Index [is still expected to rebound in the late].) Negation and speculation identification is very relevant for almost all NLP applications involving text understanding which need to discriminate between factual and non-factual information. 
The treatment of negation and speculation in computational linguistics has been shown to be 656 useful for biomedical text processing (Morante et al., 2008; Chowdhury and Lavelli, 2013), information retrieval (Averbuch, 2004), sentiment analysis (Councill et al., 2010; Zhu et al., 2014), recognizing textual entailment (Snow et al., 2006), machine translation (Baker et al., 2010; Wetzel and Bond, 2012), and so forth. The research on negation and speculation identification in English has received a noticeable boost. However, in contrast to the significant achievements concerning English, the research progress in Chinese language is quite limited. The main reason includes the following two aspects: First, the scarcity of linguistic resource seriously limits the advance of related research. To the best of our knowledge, there are no publicly available standard Chinese corpus of reasonable size annotated with negation and speculation. Second, this may be attributed to the limitations of Chinese information processing. The contributions of this paper are as follows: ● To address the aforementioned first issue, this paper seeks to fill this gap by presenting the Chinese negation and speculation corpus which consists of three kind of sub-corpora annotated for negative and speculative cues, and their linguistic scopes. The corpus has been made publicly available for research purposes and it is freely downloadable from http://nlp.suda.edu.cn/corpus. ● For cue detection, we propose a feature-based sequence labeling model to identify cues. It is worth noting that the morpheme feature is employed to better represent the compositional semantics inside Chinese words. Moreover, for improving the low recall rate which suffers from the unknown cues, we propose a cross-lingual cue expansion strategy based on parallel corpora. ● For scope resolution, we present a new syntactic structure-based framework on dependency tree. Evaluation justifies the appropriateness and validity of this framework on Chinese scope resolution, which outperforms the chunking-based framework that widely used in mainstream scope resolution systems. The layout of the rest paper is organized as follows. Section 2 describes related work. Section 3 provides details about annotation guidelines and also presents statistics about corpus characteristics. Section 4 describes our approach in detail. Section 5 reports and discusses our experimental results. Finally, we conclude our work and indicate some future work in Section 6. 2 Related Work Currently, both cue detection task and scope resolution task are always modeled as a classification problem with the purpose of predicting whether a token is inside or outside the cue and its scope. Among them, feature-based and kernel-based approaches are most popular. In the feature-based framework, Agarwal and Yu (2010) employed a conditional random fields (CRFs) model to detect speculative cues and their scopes on the BioScope corpus. The CRFsbased model achieved an F1-meature of 88% in detecting speculative cues. We train this model on our corpus as the baseline system for cue detection. Our work is different from theirs in that we employ a new feature (morpheme feature) which is particularly appropriate for Chinese. Besides, kernel-based approaches exploit the structure of the tree that connects cue and its corresponding scope. Zou et al. (2013) developed a tree kernel-based system to resolve the scope of negation and speculation, which captures the structured information in syntactic parsing trees. 
To the best of our knowledge, this system is the best English scope resolution system. For this reason, we train this system on our corpus as the baseline system for scope resolution. Compared with a fair amount of works on English negation and speculation identification, unfortunately, few works has been published on Chinese. Ji et al. (2010) developed a system to detect speculation in Chinese news texts. However, only the speculative sentences have been found out, with no more fine-grained information such as scope. The insufficient study on Chinese negation and speculation identification drives us to construct a high-quality corpus and investigate how to find an approach that is particularly appropriate for Chinese language. 3 Corpus Construction In this section, we elaborate on the overall characteristics of the Chinese Negation and Speculation (abbr., CNeSp) corpus we constructed, including a brief description of the sources that constitute our corpus, general guidelines which illustrated with lots of examples and some special cases, and statistics on the overall results of our corpus. 3.1 Sources To capture the heterogeneity of language use in texts, the corpus consists of three different 657 sources and types, including scientific literature, product reviews, and financial articles. Vincze et al. (2008) described that it is necessary to separate negative and speculative information from factual especially in science articles, because conclusions of science experiment are always described by using diversity of expressions and include hypothetical asserts or viewpoints. For this reason, we adopt the 19 articles from Chinese Journal of Computers (Vol.35(11)), an authoritative academic journal in Chinese, to construct the Scientific Literature sub-corpus. Another part of the corpus consists of 311 articles from “股市及时雨(timely rain for stock market)” column from Sina.com in April, 2013. There are 22.3% and 40.2% sentences in the Financial Article sub-corpus containing negation and speculation respectively. Many researches have investigated the role of negation in sentiment analysis task, as an important linguistic qualifier which leads to a change in polarity. For example, Councill et al. (2010) investigated the problem of determining the polarity of sentiment in movie reviews when negation words occur in the sentences. On the other hand, speculation is a linguistic expression that tends to correlate with subjectivity which is also crucial for sentiment analysis. Pang and Lee (2004) showed that subjectivity detection in the review domain helps to improve polarity classification. Therefore, the Product Review subcorpus consists of 821 comments of hotel service from the website Ctrip.com. 3.2 Annotation Guidelines The guidelines of our CNeSp corpus have partly referred to the existing Bioscope corpus guidelines (BioScope, 2008) in order to fit the needs of the Chinese language. In annotation process, negative or speculative cues and their linguistic scopes in sentence are annotated. There are several general principles below: (G1) Cue is contained in its scope. (G2) The minimal unit that expresses negation or speculation is annotated as a cue. (E3) 该股极有可能再度出现涨停. (The stock is very likely to hit limit up.) To G2, the modifiers such as prepositions, determiners, or adverbs are not annotated as parts of the cue. For example, in Sentence (E3), “极 (very)” is only a modifier of the speculative cue “可能(likely)”, but not a constituent of the cue. 
For the drawbacks of the Bioscope corpus guidelines either on itself or for Chinese language, we introduced some modifications. These main changes are summarized below: (G3) A cue is annotated only relying on its actual semantic in context. (E4) 大盘不可能再次出现高开低走. (It is not possible that the broader market opens high but slips later again.) To G3, “不可能(not possible)” means that the author denies the possibility of the situation that “the broader market opens high but slips later again”, which contains negative meanings than speculative. Thus, the phrase “不可能(not possible)” should be labeled as a negative cue. (G4) A scope should contain the subject which contributes to the meaning of the content being negated or speculated if possible. (E5) *Once again, the Disorder module does [not contribute positively to the prediction]. The BioScope corpus suggests that the scope of negative adverbs usually starts with the cue and ends at the end of the phrase, clause or sentence (E5). However, in our view, the scope should contain the subject for the integrity of meaning. Following is an exceptional case. (G5) Scope should be a continuous fragment in sentence. (E6) 酒店有高档的配套设施,然而却[不能多给 我们提供一个枕头]. (The hotel are furnished with upscale facilities, but [cannot offer us one more pillow].) Some rhetoric in Chinese language, such as parallelism or ellipsis, often gives rise to separation of some sentence constituents from others. For example, in Sentence (E6), the subject of the second clause should be “ 酒店(the hotel)”, which is omitted. In this situation, we only need to identify the negative or speculative part in sentence than all semantic constituents which can be completed through other NLP technologies, such as zero subject anaphora resolution or semantic role labeling. (G6) A negative or speculative character or word may not be a cue. (E7) 早茶的种类之多不得不赞. (We are difficult not to give credit to the variety of morning snack.) We have come across several cases where the presence of a negative or speculative character or word does not denote negative or speculative meaning. For example, there are lots of double negatives in Chinese language only for emphasizing than negative meanings. In Sentence (E7), obviously, the author wants to emphasis the praise of the variety of breakfast buffet by using 658 the phrase “不得不(be difficult not to)” which does not imply a negative meaning. The CNeSp corpus is annotated by two independent annotators who are not allowed to communicate with each other. A linguist expert resolves the differences between the two annotators and modified the guidelines when they are confronted with problematic issues, yielding the gold standard labeling of the corpus. 3.3 Statistics and Agreement Analysis Table 1 summarizes the chief characteristics of the three sub-corpora, including Scientific Literature (Sci., for short), Financial Article (Fin.), and Product Review (Prod.). As shown in Table 1, out of the total amount of 16,841 sentences more than 20% contained negation or speculation, confirming the availability for corpus. Item Sci. Fin. Prod. #Documents 19 311 821 #Sentences 4,630 7,213 4,998 Avg. Length of Sentences 30.4 30.7 24.1 Negation %Sentence 13.2 17.5 52.9 Avg. Length of Scopes 9.1 7.2 5.1 Speculation %Sentence 21.6 30.5 22.6 Avg. Length of Scopes 12.3 15.0 6.9 (Avg. Length: The average number of Chinese characters.) Table 1. Statistics of corpus. Type Sci. Fin. Prod. 
Negation Cue 0.96 0.96 0.93 Cue & Scope 0.90 0.91 0.88 Speculation Cue 0.94 0.90 0.93 Cue & Scope 0.93 0.85 0.89 Table 2. Inter-annotator agreement. We measured the inter-annotator agreement of annotating cues and their linguistic scope for all of three sub-corpora between the two independent annotators in terms of Kappa (Cohen, 1960). The results are shown in Table 2. The 2nd and 4th rows of the table show the kappa value of only cue annotation for negation and speculation, respectively. The 3rd and 5th rows show the agreement rate for both cue and its full scope. The most obvious conclusions here are that the identification of speculation is more complicated than negation even for humans because of the higher ambiguity of cues and the longer average length of scopes in speculation. 4 Chinese Negation and Speculation Identification As a pipeline task, negation and speculation identification generally consists of two basic stages, cue detection and scope resolution. The former detects whether a word or phrase implies negative or speculative meanings, while the latter determines the sequences of terms which are dominated by the corresponding cue in sentence. In this section, we improve our cue detection system by using the morpheme features of Chinese characters and expanding the cue clusters based on bilingual parallel corpora. Then, we present a new syntactic structure-based framework for Chinese language, which regards the sub-structures of dependency tree selected by a heuristic rule as scope candidates. 4.1 Cue Detection Most of the existing cue detection approaches are proposed from feature engineering perspective. They formulate cue detection as a classification issue, which is to classify each token in sentence as being the element of cue or not. Feature-based sequence labeling model At the beginning, we explore the performance of an English cue detection system, as described in Agarwal and Yu (2010), which employs a conditional random fields (abbr., CRFs) model with lexical and syntactic features. Unfortunately, the performance is very low on Chinese texts (Section 5.1). This may be attributed to the different characteristic of Chinese language, for example, no word boundaries and lack of morphologic variations. Such low performance drives us to investigate new effective features which are particularly appropriate for Chinese. We employed three kinds of features for cue detection: 1) N-gram features For each character ci, assuming its 5-windows characters are ci-2 ci-1 ci ci+1 ci+2, we adopt following features: ci-2, ci-1, ci, ci+1, ci+2, ci-1ci, cici+1, ci2ci-1ci, ci-1cici+1, cici+1ci+2. 2) Lexical features To achieve high performance as much as possible, we also use some useful basic features which are widely used in other NLP tasks on Chinese. The basic feature set consists of POS tag, the left/right character and its PoS tag. It is worth noting that the cue candidates in our model are characters. Thus, in order to get these features, we substitute them with corresponding features of the words which contain the characters. 3) Morpheme features The word-formation of Chinese implies that almost all of the meanings of a word are made up by the morphemes, a minimal meaningful unit in Chinese language contained in words. This more 659 fine-grained semantics are the compositional semantics inside Chinese words namely. We assume that the morphemes in a given cue are also likely to be contained in other cues. 
For example, “猜测(guess)” is a given speculative cue which consists of “ 猜(guess)” and “ 测(speculate)”, while the morpheme “猜(guess)” could be appeared in “猜想(suppose)”. In consideration of the Chinese characteristics, we use every potential character in cues to get the morpheme feature. A Boolean feature is taken to represent the morpheme information. Specifically, the characters which appear more than once within different cues in training corpus were selected as the features. The morpheme feature is set to 1, if the character is a negative or speculative morpheme. For the ability of capturing the local information around a cue, we choose CRFs, a conditional sequence model which represents the probability of a hidden state sequence given some observations, as classifier to label each character with a tag indicating whether it is out of a cue (O), the beginning of the cue (B) or a part of the cue except the beginning one (I). In this way, our CRFs-based cue identifier performs sequential labeling by assigning each character one of the three tags and a character assigned with tag B is concatenated with following characters with tag I to form a cue. Cross-lingual Cue Expansion Strategy The feature-based cue detection approach mentioned above shows that a bottleneck lies in low recall (see Table 4). This is probably due to the absence of about 12% negation cues and 17% speculation cues from the training data. It is a challenging task to identify unknown cues with the limited amount of training data. Hence, we propose a cross-lingual cue expansion strategy. In the approach, we take use of the top 5 Chinese cues in training corpus as our “anchor set”. For each cue, we search its automatically aligned English words from a Chinese-English parallel corpus to construct an English word cluster. The parallel corpus consisting of 100,000 sentence pairs is built by using Liu's approach (Liu et al., 2014), which combines translation model with language model to select high-quality translation pairs from 16 million sentence pairs. The word alignment was obtained by running Giza++ (Och and Ney, 2003). In each cluster, we record the frequency of each unique English word. Considering the word alignment errors in cross-lingual clusters, we filter the clusters by word alignment probability which is formulated as below: ( | ) (1 ) ( | )      A E C C E P P w w P w w ( , ) ( , ) (1 ) ( ) ( )      E C E C C E P w w P w w P w P w ( , ) ( , ) (1 ) ( , ) ( , )        E C E C Ei C Ci E i i align w w align w w align w w align w w (1) where ( | ) E C P w w is the translation probability of English word wE conditioned on Chinese word wC, reversely, while ( | ) C E P w w is the translation probability of Chinese word wC conditioned on English word wE. ( , ) m n align w w is the number of alignments of word wm and word wn in parallel corpus. ∑i ( , ) mi n align w w is the sum of the number of alignments which contain word wn. The parameter α∈[0,1] is the coefficient controlling the relative contributions from the two directions of translation probability. Then we conduct the same procedure in the other direction to construct Chinese word clusters anchored by English cues, until no new word comes about. For example, applying the above approach from the cue “可能(may)”, we obtain 59 Chinese speculative cues. All of words in the final expansion cluster are identified as cues. 
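The filtering step of this expansion can be illustrated as follows. The sketch computes the interpolated alignment probability of Eq. 1 from raw alignment counts; the dictionary of (English word, Chinese word) counts is assumed to be derived from the GIZA++ output, and the default α and the pruning threshold are illustrative (the paper tunes α on the Financial Article sub-corpus, see Section 5.1).

```python
# Minimal sketch of the alignment-probability filter (Eq. 1 of this section).
# `align` is assumed to map (english_word, chinese_word) pairs to alignment counts
# obtained from the GIZA++ output; alpha and the threshold are illustrative.
def alignment_probability(w_en, w_zh, align, alpha=0.7):
    """P_A = alpha * P(w_en | w_zh) + (1 - alpha) * P(w_zh | w_en)."""
    count = align.get((w_en, w_zh), 0)
    total_zh = sum(c for (e, z), c in align.items() if z == w_zh)   # alignments containing w_zh
    total_en = sum(c for (e, z), c in align.items() if e == w_en)   # alignments containing w_en
    p_en_given_zh = count / total_zh if total_zh else 0.0
    p_zh_given_en = count / total_en if total_en else 0.0
    return alpha * p_en_given_zh + (1 - alpha) * p_zh_given_en

def filter_cluster(anchor_zh, candidates_en, align, alpha=0.7, threshold=0.1):
    """Keep aligned English words whose probability with the Chinese anchor cue is high enough."""
    return [w for w in candidates_en
            if alignment_probability(w, anchor_zh, align, alpha) >= threshold]
```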
4.2 Scope Resolution Currently, mainstream approaches formulated the scope resolution as a chunking problem, which classifies every word of a sentence as being inside or outside the scope of a cue. However, unlike in English, we found that plenty of errors occurred in Chinese scope resolution by using words as the basic identifying candidate. In this paper we propose a new framework using the sub-structures of dependency tree as scope candidates. Specifically, given a cue, we adopt the following heuristic rule to get the scope candidates in the dependency tree. Setting constituent X and its siblings as the root nodes of candidate structure of scope, X should be the ancestor node of cue or cue itself. For example, in the sentence “所有住客均表 示不会追究酒店的这次管理失职(All of guests said that they would not investigate the dereliction of hotel)”, the negative cue “不(not)” has four constituent Xs and seven scope candidates, as shown in Figure 1. According to the above rule, three ancestor nodes {Xa: “表示(said)”, Xb: “追究(investigate)”, and Xc: “会(would)”} correspond to three scope candidates (a, b1, and c), 660 Figure 1. Examples of a negative cue and its seven scope candidates in dependency tree. Feature Description Instantiation Cue: C1: Itself Tokens of cue 不(not) C2: PoS PoS of cue d(adverb) Scope candidate: S1: Itself Tokens of headword 追究(investigate) S2: PoS PoS of headword v(verb) S3: Dependency type Dependency type of headword VOB S4: Dependency type of child nodes Dependency type of child nodes of headword ADV+VOB S5: Distance<candidate, left word> Number of dependency arcs between the first word of candidate and its left word 3 S6: Distance<candidate, right word> Number of dependency arcs between the last word of candidate and its right word 0 Relationship between cue and scope candidate: R1: Path Dependency relation path from cue to headword ADV-ADV R2: Distance<cue, headword> Number of dependency arcs between cue and headword 2 R3: Compression path Compression version of path ADV R4: Position Positional relationship of cue with scope candidate L_N(Left-nested) Table 3. Features and their instantiations for scope resolution. and the cue itself is certainly a scope candidate (d). In addition, the Xb node has two siblings in dependency tree {“住客(guests)” and “均(all of)”}. Therefore, the two scope candidates corresponding to them are b2 and b3, respectively. Similarly, the sibling of the Xc node is labeled as candidate c2. A binary classifier is applied to determine each candidate as either part of scope or not. In this paper, we employ some lexical and syntactic features about cue and candidate. Table 3 lists all of the features for scope resolution classification (with candidate b1 as the focus constituent (i.e., the scope candidate) and “不(not)” as the given cue, regarding candidate b1 in Figure 1(2)). For clarity, we categorize the features into three groups according to their relevance with the given cue (C, in short), scope candidate (S, in short), and the relationship between cue andcandidate (R, in short). Figure 2 shows four kinds of positional features between cue and scope candidate we defined (R4). Figure 2. Positional features. Some features proposed above may not be effective in classification. Therefore, we adopt a greedy feature se-lection algorithm as described in (Jiang and Ng, 2006) to pick up positive features incrementally according to their contribu661 tions on the development data. 
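The candidate generation step described at the beginning of this subsection can be sketched as follows. The dependency tree is encoded as a simple child-to-head map, the heads in the example approximately reconstruct the analysis of Figure 1, and all helper names are ours.

```python
# Minimal sketch of the heuristic scope-candidate generation on a dependency tree.
# A tree is a dict mapping each word index to its head index; 0 is the artificial root.

def ancestors(tree, node):
    """All ancestors of `node`, excluding the artificial root."""
    result, head = [], tree[node]
    while head != 0:
        result.append(head)
        head = tree[head]
    return result

def siblings(tree, node):
    return [n for n, h in tree.items() if h == tree[node] and n != node]

def scope_candidates(tree, cue):
    """Every X that is the cue or one of its ancestors contributes the subtree
    rooted at X and the subtrees rooted at X's siblings as scope candidates."""
    roots = set()
    for x in [cue] + ancestors(tree, cue):
        roots.add(x)
        roots.update(siblings(tree, x))
    return sorted(roots)        # each root identifies one candidate subtree

# Approximate tree of "所有 住客 均 表示 不 会 追究 酒店 的 这次 管理失职" (cue = 不, index 5)
tree = {1: 2, 2: 4, 3: 4, 4: 0, 5: 6, 6: 7, 7: 4, 8: 11, 9: 8, 10: 11, 11: 7}
print(scope_candidates(tree, cue=5))   # seven candidate roots, as in Figure 1
```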
Additionally, a cue should have one continuous block as its scope, but the scope identifier may result in discontinuous scope due to independent candidate in classification. For this reason, we employ a post-processing algorithm as described in Zhu et al. (2010) to identify the boundaries. 5 Experimentation In this section, we evaluate our feature-based sequence labeling model and cross-lingual cue expansion strategy on cue detection, and report the experimental results to justify the appropriateness of our syntactic structure-based framework on scope resolution in Chinese language. The performance is measured by Precision (P), Recall (R), and F1-score (F). In addition, for scope resolution, we also report the accuracy in PCS (Percentage of Correct Scopes), within which a scope is fully correct if the output of scope resolution system and the correct scope have been matched exactly. 5.1 Cue Detection Results of the Sequence Labeling Model Every sub-corpus is randomly divided into ten equal folds so as to perform ten-fold cross validation. Lexical features are gained by using an open-source Chinese language processing platform, LTP1(Che et al., 2010) to perform word segmentation, POS tagging, and syntactic parsing. CRF++0.582 toolkit is employed as our sequence labeling model for cue detection. Table 4 lists the performances of cue detection systems using a variety of features. It shows that the morpheme features derived from the wordformation of Chinese improve the performance for both negation and speculation cue detection systems on all kinds of sub-corpora. However, the one exception occurs in negation cue detection on the Product Review sub-corpus, in which the performance is decreased about 4.55% in precision. By error analysis, we find out the main reason is due to the pseudo cues. For example, “非常(very)” is identified by the negative morpheme “非(-un)”, which is a pseudo cue. Table 4 also shows a bottleneck of our sequence labeling model, which lies in low recall. Due to the diversity of Chinese language, many cues only appear a few times in corpus. For ex 1 http://www.ltp-cloud.com 2 https://crfpp.googlecode.com/svn/trunk/doc/index.html ample, 83% (233/280) of speculative cues appear less than ten times in Financial Article subcorpus. This data sparse problem directly leads to the low recall of cue detection. Negation Speculation Sci. P R F1 P R F1 Agarwal’s 48.75 36.44 41.71 46.16 33.49 38.82 N-gram 64.07 49.64 55.94 62.15 42.87 50.74 +Lexical 76.68 57.36 65.63 70.47 48.31 57.32 +Morpheme 81.37 59.11 68.48 76.91 50.77 61.16 Fin. Agarwal’s 41.93 39.15 40.49 50.39 42.80 46.29 N-gram 56.05 45.48 50.21 60.37 44.16 51.01 +Lexical 71.61 50.12 58.97 68.96 48.72 57.10 +Morpheme 78.94 53.37 63.68 75.43 51.29 61.06 Prod. Agarwal’s 58.47 47.31 52.30 45.88 34.13 39.14 N-gram 71.33 54.69 61.91 49.38 39.31 43.77 +Lexical 86.76 65.41 74.59 64.85 44.63 52.87 +Morpheme 82.21 66.82 73.72 70.06 45.31 55.03 Table 4. Contribution of features to cue detection. Results of the Cross-lingual Cue Expansion Strategy Before cue expansion, we select the parameter α as defined in formula (1) by optimizing the F1measure score of on Financial Article sub-corpus. Figure 3 shows the effect on F1-measure of varying the coefficient from 0 to 1. We can see that the best performance can be obtained by selecting parameter 0.6 for negation and 0.7 for speculation. Then we apply these parameter values directly for cue expansion. Figure 3. The effect of varying the value of parameter α on Financial Article sub-corpus. 
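The coefficient sweep behind Figure 3 amounts to a simple grid search; the step size and the signature of the evaluation callback below are assumptions for illustration only.

```python
def tune_alpha(evaluate_f1, step=0.1):
    """Sweep alpha over [0, 1] and keep the value with the best F1 on the
    Financial Article development data; the reported optima are 0.6 for
    negation and 0.7 for speculation.

    `evaluate_f1(alpha)` is a hypothetical callback that runs the
    expansion-based cue detector with the given coefficient and scores it.
    """
    best_alpha, best_f1 = 0.0, float("-inf")
    n_steps = int(round(1.0 / step))
    for i in range(n_steps + 1):
        alpha = i * step
        f1 = evaluate_f1(alpha)
        if f1 > best_f1:
            best_alpha, best_f1 = alpha, f1
    return best_alpha, best_f1
```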
Table 5 lists the performances of feature-based system, expansion-based system, and the combined system. A word is identified as a cue by combined system if it is identified by one of the above systems (Feat-based or Exp-based) at least. For both negation and speculation, the crosslingual cue expansion approach provides significant improvement over the feature-based sequence labeling model, achieving about 15-20% 662 better recall with little loss in precision. More importantly, the combined system obtains the best performance. Negation Speculation Sci. P R F1 P R F1 Feat-based 81.37 59.11 68.48 76.91 50.77 61.16 Exp-based 68.29 76.24 72.05 62.74 68.07 65.30 Combined 75.17 78.91 76.99 70.98 75.71 73.27 Fin. Feat-based 78.94 53.37 63.68 75.43 51.29 61.06 Exp-based 70.31 64.49 67.27 67.46 68.78 68.11 Combined 72.77 67.02 69.78 71.60 69.03 70.29 Prod. Feat-based 82.21 66.82 73.72 70.06 45.31 55.03 Exp-based 78.30 86.47 82.18 62.18 63.47 62.82 Combined 81.94 89.23 85.43 67.56 69.61 68.57 Table 5. Performance of cue detection. 5.2 Syntactic Structure-based Scope Resolution Considering the effectiveness of different features, we divide the Financial Article sub-corpus into 5 equal parts, within which 2 parts are used for feature selection. Then, the feature selection data are divided into 5 equal parts, within which 4 parts for training and the rest for developing. On this data set, a greedy feature selection algorithm (Jiang and Ng, 2006) is adopted to pick up positive features proposed in Table 3. In addition, SVMLight3 with the default parameter is selected as our classifier. Table 6 lists the performance of selected features. 7 features {C1, C2, S4, S5, S6, R1, R4} are selected consecutively for negation scope resolution, while 9 features {C2, S1, S3, S4, S5, R1, R2, R3, R4} are selected for speculation scope resolution. We will include those selected features in all the remaining experiments. Type Feature set Sci. Fin. Prod. Negation Selected features 62.16 56.07 60.93 All features 59.74 54.20 55.42 Speculation Selected features 54.16 49.64 52.89 All features 52.33 46.27 48.07 Table 6. Feature selection for scope resolution on golden cues (PCS %). The feature selection experiments suggest that the feature C2 (POS of cue) plays a critical role for both negation and speculation scope resolution. It may be due to the fact that cues of different POS usually undertake different syntactic roles. Thus, there are different characteristics in triggering linguistic scopes. For example, an adjective cue may treat a modificatory structure as 3 http://svmlight.joachims.org its scope, while a conjunction cue may take the two connected components as its scope. As a pipeline task, the negation and speculation identification could be regarded as a combination of two sequential tasks: first, cue detection, and then scope resolution. Hence, we turn to a more realistic scenario in which cues are automatically recognized. Type Corpus P R F1 PCS Negation Sci. 55.32 53.06 54.17 59.08 Fin. 42.14 46.37 44.15 49.24 Prod. 50.57 48.55 49.54 52.17 Speculation Sci. 45.68 47.15 46.40 48.36 Fin. 34.21 31.80 32.96 41.33 Prod. 32.64 33.59 33.11 39.78 Table 7. Performance of scope resolution with automatic cue detection. Table 7 lists the performance of scope resolution by using automatic cues. 
It shows that automatic cue detection lowers the performance by 3.08, 6.83, and 8.76 in PCS for the three subcorpora, respectively; while it lowers the performance by 5.80, 8.31 and 13.11 in PCS for speculation scope resolution on the three sub-corpora, respectively (refer to Table 6). The main reason of performance lost is the error propagation from the automatic cue detection. We employ a start-of-the-art chunking-based scope resolution system (described in Zou et al., (2013)) as a baseline, in which every word in sentence has been labelled as being the element of the scope or not. Table 8 compares our syntactic structure-based framework with the chunkingbased framework on scope resolution. Note that all the performances are achieved on Financial Article sub-corpus by using golden cues. The results in Table 8 shows that our scope resolution system outperforms the chunking ones both on negation and speculation, improving 8.75 and 7.44 in PCS, respectively. Type System PCS Negation Chunking-based 47.32 Ours 56.07 Speculation Chunking-based 42.20 Ours 49.64 Table 8. Comparison with the chunking-based system on Financial Article sub-corpus. 6 Conclusion In this paper we construct a Chinese corpus for negation and speculation identification, which annotates cues and their linguistic scopes. For cue detection, we present a feature-based sequence labeling model, in which the morpheme 663 feature is employed to better catch the composition semantics inside the Chinese words. Complementally, a cross-lingual cue expansion strategy is pro-posed to increase the coverage in cue detection. For scope resolution, we present a new syntactic structure-based framework to identify the linguistic scope of a cue. Evaluation justifies the usefulness of our Chinese corpus and the appropriateness of the syntactic structurebased framework. It also shows that our approach outperforms the state-of-the-art chunking ones on negation and speculation identification in Chinese language. In the future we will explore more effective features to improve the negation and speculation identification in Chinese language, and focus on joint learning of the two subtasks. Acknowledgments This research is supported by the National Natural Science Foundation of China, No.61272260, No.61331011, No.61273320, No.61373097, and the Major Project of College Natural Science Foundation of Jiangsu Province, No.11KJA520003. The authors would like to thank the anonymous reviewers for their insightful comments and suggestions. Reference Shashank Agarwal and Hong Yu. 2010. Detecting hedge cues and their scope in biomedical text with conditional random fields. Journal of Biomedical Informatics, 43, 953-961. Mordechai Averbuch, Tom H. Karson, Benjamin Ben-Ami, Oded Maimon, and Lior Rokach. 2004. Context-sensitive medical information retrieval. In Proceedings of the 11th World Congress on Medical Informatics (MEDINFO’04), 1-8. Kathrin Baker, Michael Bloodgood, Bonnie Dorr, Nathaniel W. Filardo, Lori Levin, and Christine Piatko. 2010. A modality lexicon and its use in automatic tagging. In Proceedings of the Seventh Conference on International Language Resources and Evaluation (LREC’10), 1402-1407. BioScope. 2008. Annotation guidelines. http://www.inf.uszeged.hu/rgai/project/nlp/bioscope/Annotation guidelines2.1.pdf Wanxiang Che, Zhenghua Li, Ting Liu. 2010. LTP: A Chinese language technology platform. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING'10): Demonstrations, 13-16. Md. 
Faisal Mahbub Chowdhury and Alberto Lavelli. 2013. Exploiting the scope of negations and heterogeneous features for relation extraction: A case study for drug-drug interaction extraction. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT'13), 765-771. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20, 37-46. Isaac Councill, Ryan McDonald, and Leonid Velikovich. 2010. What’s great and what’s not: Learning to classify the scope of negation for improved sentiment analysis. In Proceedings of the Workshop on Negation and Speculation in Natural Language Processing, 51-59. Zhengping Jiang and Hwee T. Ng. 2006. Semantic role labeling of NomBank: A maximum entropy approach. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (EMNLP’06), 138-145. Le Liu, Yu Hong, Hao Liu, Xing Wang, and Jianmin Yao. 2014. Effective selection of translation model training data. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL'14), Short Papers, 569-573. Feng Ji, Xipeng Qiu, Xuanjing Huang. 2010. Exploring uncertainty sentences in Chinese. In Proceedings of the 16th China Conference on Information Retreval, 594-601. Roser Morante, Anthony Liekens, and Walter Daelemans. 2008. Learning the scope of negation in biomedical texts. In Proceedings of the Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing (EMNLP’08), 715-724. Roser Morante and Caroline Sporleder. 2012. Modality and negation: an introduction to the special issue. Comput. Linguist. 38, 2, 223-260. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Comput. Linguist. 29, 1, 19-51. Bo Pang and Lillian Lee. 2004. A sentimental education: sentiment analysis using subjectivity. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL'04), 271-278. Rion Snow, Lucy Vanderwende, and Arul Menezes. 2006. Effectively using syntax for recognizing false entailment. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Associ664 ation of Computational Linguistics (HLTNAACL’06), 33-40. Veronika Vincze, György Szarvas, Richárd Farkas, György Móra and János Csirik. 2008. The BioScope corpus: biomedical texts annotated for uncertainty, negation and their scopes. BMC Bioinformatics, 9(Suppl 11):S9. Dominikus Wetzel, and Francis Bond. 2012. Enriching parallel corpora for statistical machine translation with semantic negation rephrasing. In Proceedings of the 6th Workshop on Syntax, Semantics and Structure in Statistical Translation, 20-29. Qiaoming Zhu, Junhui Li, Hongling Wang, and Guodong Zhou. 2010. A Unified Framework for Scope Learning via Simplified Shallow Semantic Parsing. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP’10), 714-724. Xiaodan Zhu, Hongyu Guo, Saif Mohammad, and Svetlana Kiritchenko. 2014. An empirical study on the effect of negation words on sentiment. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL’14), 304-313. Bowei Zou, Guodong Zhou, and Qiaoming Zhu. 2013. Tree kernel-based negation and speculation scope detection with structured syntactic parse features. 
In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing (EMNLP’13), 968-976.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 666–675, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Learning Relational Features with Backward Random Walks Ni Lao Google Inc. [email protected] Einat Minkov University of Haifa [email protected] William W. Cohen Carnegie Mellon University [email protected] Abstract The path ranking algorithm (PRA) has been recently proposed to address relational classification and retrieval tasks at large scale. We describe Cor-PRA, an enhanced system that can model a larger space of relational rules, including longer relational rules and a class of first order rules with constants, while maintaining scalability. We describe and test faster algorithms for searching for these features. A key contribution is to leverage backward random walks to efficiently discover these types of rules. An empirical study is conducted on the tasks of graph-based knowledge base inference, and person named entity extraction from parsed text. Our results show that learning paths with constants improves performance on both tasks, and that modeling longer paths dramatically improves performance for the named entity extraction task. 1 Introduction Structured knowledge about entities and the relationships between them can be represented as an edge-typed graph, and relational learning methods often base predictions on connectivity patterns in this graph. One such method is the Path Ranking Algorithm (PRA), a random-walk based relational learning and inference framework due to Lao and Cohen (2010b). PRA is highly scalable compared with other statistical relational learning approaches, and can therefore be applied to perform inference in large knowledge bases (KBs). Several recent works have applied PRA to link prediction in semantic KBs, as well as to learning syntactic relational patterns used in information extraction from the Web (Lao et al., 2012; Gardner et al., 2013; Gardner et al., 2014; Dong et al., 2014). A typical relational inference problem is illustrated in Figure 1. Having relational knowledge represented as a graph, it is desired to infer additional relations of interest between entity pairs. For example, one may wish to infer whether an AthletePlaysInLeague relation holds between nodes HinesWard and NFL. More generally, link prediction involves queries of the form: which entities are linked to a source node s (HinesWard) over a relation of interest r (e.g., r is AlthletePlaysInLeague)? PRA gauges the relevance of a target node t with respect to the source node s and relation r based on a set of relation paths (i.e., sequences of edge labels) that connect the node pair. Each path πi is considered as feature, and the value of feature πi for an instance (s, t) is the probability of reaching t from s following path πi. A classifier is learned in this feature space, using logistic regression. PRA’s candidate paths correspond closely to a certain class of Horn clauses: for instance, the path π = ⟨AthletePlaysForTeam, TeamPlaysInLeague⟩, when used as a feature for the relation r = AthletePlaysForLeague, corresponds to the Horn clause AthletePlaysForTeam(s, z) ∧TeamPlaysInLeague(z, t) →AthletePlaysForLeague(s, t) One difference between PRA’s features and more traditional logical inference is that random-walk weighting means that not all inferences instantiated by a clause will be given the same weight. 
Another difference is that PRA is very limited in terms of expressiveness. In particular, inductive logic programming 666 Eli Manning Giants AthletePlays ForTeam HinesWard Steelers AthletePlays ForTeam NFL TeamPlays InLeague MLB TeamPlays InLeague TeamPlays InLeague Figure 1: An example knowledge graph (ILP) methods such as FOIL (Quinlan and Cameron-Jones, 1993) learn first-order Horn rules that may involve constants. Consider the following rules as motivating examples. EmployeedByAgent(s, t) ∧IsA(t, SportsTeam) →AthletePlaysForTeam(s, t) t = NFL →AthletePlaysForTeam(s, t) The first rule includes SportsTeam as a constant, corresponding to a particular graph node, which is a the semantic class (hypernym) of the target node t. The second rule simply assigns NFL as the target node for the AthletePlaysForTeam relation; if used probabilistically, this rule can serve as a prior. Neither feature can be expressed in PRA, as PRA features are restricted to edge type sequences. We are interested in extending the range of relational rules that can be represented within the PRA framework, including rules with constants. A key challenge is that this greatly increases the space of candidate rules. Knowledge bases such as Freebase (Bollacker et al., 2008), YAGO (Suchanek et al., 2007), or NELL (Carlson et al., 2010a), may contain thousands of predicates and millions of concepts. The number of features involving concepts as constants (even if limited to simple structures such as the example rules above) will thus be prohibitively large. Therefore, it is necessary to search the space of candidate paths π very efficiently. More efficient candidate generation is also necessary if one attempts to use a looser bound on the length of candidate paths. To achieve this, we propose using backward random walks. Given target nodes that are known to be relevant for relation r, we perform backward random walks (up to finite length ℓ) originating at these target nodes, where every graph node c reachable in this random walk process is considered as a potentially useful constant. Consequently, the relational paths that connect nodes c and t are evaluated as possible random walk features. As we will show, such paths provide informative class priors for relational classification tasks. Concretely, this paper makes the following contributions. First, we outline and discuss a new and larger family of relational features that may be represented in terms of random walks within the PRA framework. These features represent paths with constants, expanding the expressiveness of PRA. In addition, we propose to encode bi-directional random walk probabilities as features; we will show that accounting for this sort of directionality provides useful information about graph structure. Second, we describe the learning of this extended set of paths by means of backward walks from relevant target nodes. Importantly, the search and computation of the extended set of features is performed efficiently, maintaining high scalability of the framework. Concretely, using backward walks, one can compute random walk probabilities in a bi-directional fashion; this means that for paths of length 2M, the time complexity of path finding is reduced from O(|V |2M) to O(|V |M), where |V | is the number of edge types in graph. Finally, we report experimental results for relational inference tasks in two different domains, including knowledge base link prediction and person named entity extraction from parsed text (Minkov and Cohen, 2008). 
It is shown that the proposed extensions allow one to effectively explore a larger feature space, significantly improving model quality over previously published results in both domains. In particular, incorporating paths with constants significantly improves model quality on both tasks. Bi-directional walk probability computation also enables the learning of longer predicate chains, and the modeling of long paths is shown to substantially improve performance on the person name extraction task. Importantly, learning and inference remain highly efficient in both these settings. 2 Related Work ILP complexity stems from two main sources—the complexity of searching for clauses, and of evaluating them. First-order learning systems (e.g. FOIL, FOCL (Pazzani et al., 1991)) mostly rely on hill-climbing search, 667 i.e., incrementally expanding existing patterns to explore the combinatorial model space, and are thus often vulnerable to local maxima. PRA takes another approach, generating features using efficient random graph walks, and selecting a subset of those features which pass precision and frequency thresholds. In this respect, it resembles a stochastic approach to ILP used in earlier work (Sebag and Rouveirol, 1997).The idea of sampling-based inference and induction has been further explored by later systems (Kuˇzelka and ˇZelezn´y, 2008; Kuˇzelka and ˇZelezn´y, 2009). Compared with conventional ILP or relational learning systems, PRA is limited to learning from binary predicates, and applies random-walk semantics to its clauses. Using sampling strategies (Lao and Cohen, 2010a), the computation of clause probabilities can be done in time that is independent of the knowledge base size, with bounded error rate (Wang et al., 2013). Unlike in FORTE and similar systems, in PRA, sampling is also applied to the induction path-finding stage. The relational feature construction problem (or propositionalization) has previously been addressed in the ILP community—e.g., the RSD system (ˇZelezn´y and Lavraˇc, 2006) performs explicit first-order feature construction guided by an precision heuristic function. In comparison, PRA uses precision and recall measures, which can be readily read off from random walk results. Bi-directional search is a popular strategy in AI, and in the ILP literature. The Aleph algorithm (Srinivasan, 2001) combines top-down with bottom-up search of the refinement graph, an approach inherited from Progol. FORTE (Richards and Mooney, 1991) was another early ILP system which enumerated paths via a bi-directional seach. Computing backward random walks for PRA can be seen as a particular way of bi-directional search, which is also assigned a random walk probability semantics. Unlike in prior work, we will use this probability semantics directly for feature selection. 3 Background We first review the Path Ranking Algorithm (PRA) as introduced by (Lao and Cohen, 2010b), paying special attention to its random walk feature estimation and selection components. 3.1 Path Ranking Algorithm Given a directed graph G, with nodes N, edges E and edge types R, we assume that all edges can be traversed in both directions, and use r−1 to denote the reverse of edge type r ∈R. A path type π is defined as a sequence of edge types r1 . . . rℓ. Such path types may be indicative of an extended relational meaning between graph nodes that are linked over these paths; for example, the path ⟨AtheletePlaysForTeam, TeamPlaysInLeague⟩ implies the relationship “the league a certain player plays for”. 
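For concreteness, a toy version of such an edge-typed graph with explicit inverse edge types might look as follows; the class and the string convention for r⁻¹ are illustrative only.

```python
from collections import defaultdict

class TypedGraph:
    """Edge-typed directed graph in which every edge is also indexed under
    its inverse type, so relation paths can be traversed in both directions."""

    def __init__(self):
        self._adj = defaultdict(list)               # (node, edge_type) -> neighbours

    def add_edge(self, src, rel, dst):
        self._adj[(src, rel)].append(dst)
        self._adj[(dst, rel + "^-1")].append(src)   # the reverse edge type r^-1

    def neighbours(self, node, rel):
        return self._adj.get((node, rel), [])


# Fragment of the knowledge graph of Figure 1.
g = TypedGraph()
g.add_edge("HinesWard", "AthletePlaysForTeam", "Steelers")
g.add_edge("Steelers", "TeamPlaysInLeague", "NFL")

# A path type is just a sequence of edge types.
league_of_athlete = ("AthletePlaysForTeam", "TeamPlaysInLeague")
```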
PRA encodes P(s →t; πj), the probability of reaching target node t starting from source node s and following path πj, as a feature that describes the semantic relation between s and t. Specifically, provided with a set of selected path types up to length ℓ, Pℓ= {π1, . . . , πm}, the relevancy of target nodes t with respect to the query node s and the relationship of interest is evaluated using the following scoring function score(s, t) = X πj∈Pℓ θjP(s →t; πj), (1) where θ are appropriate weights for the features, estimated in the following fashion. Given a relation of interest r and a set of annotated node pairs {(s, t)}, for which it is known whether r(s, t) holds or not, a training data set D = {(x, y)} is constructed, where x is a vector of all the path features for the pair (s, t)—i.e., the j-th component of x is P(s → t; πj), and y is a boolean variable indicating whether r(s, t) is true. We adopt the closed-world assumption—a set of relevant target nodes Gi is specified for every example source node si and relation r, and all other nodes are treated as negative target nodes. A biased sampling procedure selects only a small subset of negative samples to be included in the objective function (Lao and Cohen, 2010b). The parameters θ are estimated from both positive and negative examples using a regularized logistic regression model. 3.2 PRA Features–Generation and Selection PRA features are of the form P(s →t; πj), denoting the probability of reaching target node t, originating random walk at node s and following edge type sequence πj. These path probabilities need to be estimated for every node pair, as part of both training and inference. High scalability 668 is achieved due to efficient path probability estimation. In addition, feature selection is applied so as to allow efficient learning and avoid overfitting. Concretely, the probability of reaching t from s following path type π can be recursively defined as P(s →t; π) = X z P(s →z; π′)P(z →t; r), (2) where r is the last edge type in path π, and π′ is its prefix, such that adding r to π’ gives π. In the terminal case that π’ is the empty path φ, P(s →z; φ) is defined to be 1 if s = z, and 0 otherwise. The probability P(z →t; r) is defined as 1/|r(z)| if r(z, t), and 0 otherwise, where r(z) is the set of nodes linked to node z over edge type r. It has been shown that P(s →t; π) can be effectively estimated using random walk sampling techniques, with bounded complexity and bounded error, for all graph nodes that can be reached from s over path type π (Lao and Cohen, 2010a). Due to the exponentially large feature space in relational domains, candidate path features are first generated using a dedicated particle filtering path-finding procedure (Lao et al., 2011), which is informed by training signals. Meaningful features are then selected using the following goodness measures, considering path precision and coverage: precision(π) = 1 n X i P(si →Gi; π), (3) coverage(π) = X i I(P(si →Gi; π) > 0). (4) where P(si →Gi; π) ≡P t∈Gi P(si →t; π). The first measure prefers paths that lead to correct nodes with high average probability. The second measure reflects the number of queries for which some correct node is reached over path π. In order for a path type π to be included in the PRA model, it is required that the respective scores pass thresholds, precision(π) ≥a and coverage(π) ≥h, where the thresholds a and h are tuned empirically using training data. 
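The exact (non-sampled) computation behind Eqs. (2)-(4) can be sketched as follows; in practice PRA estimates these quantities with random-walk sampling and particle-filtering path finding, so this brute-force version only serves to fix the semantics. The `neighbours(node, rel)` accessor and the query format are assumptions, and the default thresholds are the values used later in the experiments.

```python
def path_probability(neighbours, source, path):
    """P(s -> t; pi) of Eq. (2): at each step, spread the probability mass of
    a node uniformly over the nodes it reaches via the next edge type."""
    dist = {source: 1.0}
    for rel in path:
        nxt = {}
        for node, mass in dist.items():
            targets = neighbours(node, rel)
            if not targets:
                continue
            share = mass / len(targets)
            for t in targets:
                nxt[t] = nxt.get(t, 0.0) + share
        dist = nxt
    return dist                      # t -> P(s -> t; pi) for every reachable t


def select_paths(candidate_paths, queries, neighbours, a=0.001, h=2):
    """Keep paths whose precision (Eq. 3) and coverage (Eq. 4) pass the
    thresholds a and h; `queries` is a list of (source, relevant_target_set)."""
    kept = []
    for path in candidate_paths:
        precision_sum, coverage = 0.0, 0
        for s, targets in queries:
            dist = path_probability(neighbours, s, path)
            hit = sum(dist.get(t, 0.0) for t in targets)
            precision_sum += hit
            coverage += int(hit > 0)
        if precision_sum / len(queries) >= a and coverage >= h:
            kept.append(path)
    return kept
```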
4 Cor-PRA We will now describe the enhanced system, which we call Cor-PRA, for the Constant and Reversed Path Ranking Algorithm. Our goal is to enrich the space of relational rules that can be represented using PRA, while maintaining the scalability of this framework. 4.1 Backward random walks We first introduce backward random walks, which are useful for generating and evaluating the set of proposed relational path types, including paths with constants. As discussed in Sec.4.4, the use of backward random walks also enables the modeling of long relational paths within Cor-PRA. A key observation is that the path probability P(s →t; π) may be computed using forward random walks (Eq. (2)), or alternatively, it can be recursively defined in a backward fashion: P(t ←s; π) = X z P(t ←z; π′−1)P(z ←s; r−1) (5) where π′−1 is the path that results from removing the last edge type r in π′. Here, in the terminal condition that π′−1 = φ, P(t ←z; π′−1) is defined to be 1 for z = t, and 0 otherwise. In what follows, the starting point of the random walk calculation is indicated at the left side of the arrow symbol; i.e., P(s →t; π) denotes the probability of reaching t from s computed using forward random walks, and P(t ←s; π) denotes the same probability, computed in a backward fashion. 4.2 Relational paths with constants As stated before, we wish to model relational rules that may include constants, denoting related entities or concepts. Main questions are, how can relational rules with constants be represented as path probability features? and, how can meaningful rules with constants be generated and selected efficiently? In order to address the first question, let us assume that a set of constant nodes {c}, which are known to be useful with respect to relation r, has been already identified. The relationship between each constant c and target node t may be represented in terms of path probability features, P(c →t; π). For example, the rule IsA(t, SportsTeam) corresponds to a path originating at constant SportsTeam, and reaching target node t over a direct edge typed IsA−1. Such paths, which are independent of the source node s, readily represent the semantic type, or other 669 characteristic attributes of relevant target nodes. Similarly, a feature (c, φ), designating a constant and an empty path, forms a prior for the target node identity. The remaining question is how to identify meaningful constant features. Apriori, candidate constants range over all of the graph nodes, and searching for useful paths that originate at arbitrary constants is generally intractable. Provided with labeled examples, we apply the path-finding procedure for this purpose, where rather than search for high-probability paths from source node s to target t, paths are explored in a backward fashion, initiating path search at the known relevant target nodes t ∈Gi per each labeled query. This process identifies candidate (c, π) tuples, which give high P(c ←t; π−1) values, at bounded computation cost. As a second step, P(c →t; π) feature values are calculated, where useful path features are selected using the precision and coverage criteria. Further details are discussed in Section 4.4. 4.3 Bi-directional Random Walk Features The PRA algorithm only uses features of the form P(s →t; π). In this study we also consider graph walk features in the inverse direction of the form P(s ← t; π−1). Similarly, we consider both P(c →t; π) and P(c ←t; π−1). 
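A sketch of the backward computation in Eq. (5) is given below: the walk starts at a known target and follows the path's edge types in reverse over the inverse edges, so every node it reaches is simultaneously scored as a potential constant c. The string convention for inverse edge types is an assumption.

```python
def backward_path_probability(neighbours, target, path):
    """P(t <- .; pi) of Eq. (5): walk backward from the target, spreading the
    mass of each node uniformly over its neighbours under the inverse of the
    next (right-most) edge type in the path."""
    dist = {target: 1.0}
    for rel in reversed(path):
        nxt = {}
        for node, mass in dist.items():
            sources = neighbours(node, rel + "^-1")   # inverse edge type r^-1
            if not sources:
                continue
            share = mass / len(sources)
            for s in sources:
                nxt[s] = nxt.get(s, 0.0) + share
        dist = nxt
    # Every node with non-zero mass is a candidate source s -- or, when the
    # walk starts from annotated targets, a candidate constant c for a
    # feature of the form P(c -> t; pi).
    return dist
```

Both the forward value P(s → t; π) and its backward counterpart P(s ← t; π⁻¹) are kept as separate features in Cor-PRA.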
While these path feature pairs represent the same logical expressions, the directional random walk probabilities may greatly differ. For example, it may be highly likely for a random walker to reach a target node representing a sports team t from node s denoting a player over a path π that describes the functional AthletePlaysForTeam relation, but unlikely to reach a particular player node s from the multiplayer team t via the reversed path π−1. In general, there are six types of random walk probabilities that may be modeled as relational features following the introduction of constant paths and inverse path probabilities. The random walk probabilities between s and constant nodes c, P(s →c; π) and P(s ←c; π), do not directly affect the ranking of candidate target nodes, so we do not use them in this study. It is possible, however, to generate random walk features that combine these probabilities with random walks starting or ending with t through conjunction. Algorithm 1 Cor-PRA Feature Induction1 Input training queries {(si, Gi)}, i = 1...n for each query (s, G) do 1. Path exploration (i). Apply path-finding to generate paths Ps up to length ℓthat originate at si. (ii). Apply path-finding to generate paths Pt up to length ℓthat originate at every ti ∈Gi. 2. Calculate random walk probabilities: for each πs ∈Ps: do compute P(s →x; πs) and P(s ←x; π−1 s ) end for for each πt ∈Pt: do compute P(G →x; πt) and P(G ←x; π−1 t ) end for 3. Generate constant paths candidates: for each (x ∈N, π ∈Pt) with P(G →x|πt) > 0 do propose path feature P(c ←t; π−1 t ) setting c = x, and update its statistics by coverage += 1. end for for each (x ∈N, π ∈Pt) with P(G ←x|π−1 t ) > 0 do propose P(c →t; πt) setting c = x and update its statistics by coverage += 1 end for 4. Generate long (concatenated) path candidates: for each (x ∈N, πs ∈Ps, πt ∈Pt) with P(s → x|πs) > 0 and P(G ←x|π−1 t ) > 0 do propose long path P(s →t; πs.π−1 t ) and update its statistics by coverage += 1, and precision += P(s →x|πs)P(G ←x|π−1 t )/n. end for for each (x ∈N, πs ∈Ps, πt ∈Pt) with P(s ← x|π−1 s ) > 0 and P(G →x|πt) > 0 do propose long path P(s ←t; πt.π−1 s ) and update its statistics by coverage += 1, and precision += P(s ←x|π−1 s )P(G →x|πt)/n. end for end for 4.4 Cor-PRA feature induction and selection The proposed feature induction procedure is outlined in Alg. 1. Given labeled node pairs, the particle-filtering path-finding procedure is first applied to identify edge type sequences up to length ℓthat originate at either source nodes si or relevant target nodes ti (step 1). Bi-directional path probabilities are then calculated over these paths, recording the terminal graph nodes x (step 2). Note that since the set of nodes x may be large, path probabilities are all computed with respect to s or t as starting points. As a result of the induction process, candidate relational paths involving constants are identified, and are associated with precision and coverage statistics (step 3). Further, long paths up to length 2ℓare formed between the source and target nodes as the combination of paths πs from the source side and path πt from the target side, updating accuracy and coverage statistics for the concatenated paths πsπt 670 (step 4). Following feature induction, feature selection is applied. First, random walks are performed for all the training queries, so as to obtain complete (rather than sampled) precision and coverage statistics per path. Then relational paths, which pass respective tuned thresholds are added to the model. 
We found, however, that applying this strategy for paths with constants often leads to over-fitting. We therefore select only the top K constant features in terms of F12, where K is tuned using training examples. Finally, at test time, random walk probabilities are calculated for the selected paths, starting from either s or c nodes per query–since the identity of relevant targets t is unknown, but rather has to be revealed. 5 Experiments In this section, we report the results of applying Cor-PRA to the tasks of knowledge base inference and person named entity extraction from parsed text. We performed 3-fold cross validation experiments, given datasets of labeled queries. For each query node in the evaluation set, a list of graph nodes ranked by their estimated relevancy to the query node s and relation r is generated. Ideally, relevant nodes should be ranked at the top of these lists. Since the number of correct answers is large for some queries, we report results in terms of mean average precision (MAP), a measure that reflects both precision and recall (Turpin and Scholer, 2006). The coverage and precision thresholds of Cor-PRA were set to h = 2 and a = 0.001 in all of the experiments, following empirical tuning using a small subset of the training data. The particle filtering path-finding algorithm was applied using the parameter setting wg = 106, so as to find useful paths with high probability and yet constrain the computational cost. Our results are compared against the FOIL algorithm3, which learns first-order horn clauses. In order to evaluate FOIL using MAP, its candidate beliefs are first ranked by the number of FOIL rules they match. We further report results using Random Walks with Restart (RWR), also 2F1 is the harmonic mean of precision and recall, where the latter is defined as coverage total number targets in training queries 3http://www.rulequest.com/Personal/ Table 1: MAP and training time [sec] on KB inference and NE extraction tasks. consti denotes constant paths up to length i. KB inference NE extraction Time MAP Time MAP RWR 25.6 0.429 7,375 0.017 FOIL 18918.1 0.358 366,558 0.167 PRA 10.2 0.477 277 0.107 CoR-PRA-no-const 16.7 0.479 449 0.167 CoR-PRA-const2 23.3 0.524 556 0.186 CoR-PRA-const3 27.1 0.530 643 0.316 known as personalized PageRank (Haveliwala, 2002), a popular random walk based graph similarity measure, that has been shown to be fairly successful for many types of tasks (e.g., (Agirre and Soroa, 2009; Moro et al., 2014)). Finally, we compare against PRA, which models relational paths in the form of edge-sequences (no constants), using only uni-directional path probabilities, P(s →t; π). All experiments were run on a machine with a 16 core Intel Xeon 2.33GHz CPU and 24Gb of memory. All methods are trained and tested with the same data splits. We report the total training time of each method, measuring the efficiency of inference and induction as a whole. 5.1 Knowledge Base Inference We first consider relational inference in the context of NELL, a semantic knowledge base constructed by continually extracting facts from the Web (Carlson et al., 2010b). This work uses a snapshot of the NELL knowledge base graph, which consists of ∼1.6M edges comprised of 353 edge types, and ∼750K nodes. Following Lao et al. (2011), we test our approach on 16 link prediction tasks, targeting relations such as Athlete-plays-in-league, Team-plays-in-league and Competes-with. Table 1 reports MAP results and training times for all of the evaluated methods. 
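For reference, one common way to compute the MAP scores reported below is sketched here; the exact tie-breaking and averaging conventions of the original evaluation are not specified, so this is only an approximation of the metric.

```python
def mean_average_precision(ranked_lists, relevant_sets):
    """MAP over the evaluation queries: average, over queries, of the mean
    precision measured at the rank of each relevant node in the returned
    ranking (unretrieved relevant nodes contribute zero precision)."""
    ap_values = []
    for ranking, relevant in zip(ranked_lists, relevant_sets):
        if not relevant:
            continue
        hits, precisions = 0, []
        for rank, node in enumerate(ranking, start=1):
            if node in relevant:
                hits += 1
                precisions.append(hits / rank)
        ap_values.append(sum(precisions) / len(relevant))
    return sum(ap_values) / len(ap_values)
```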
The maximum path length of RWR, PRA, and CoR-PRA are set to 3 since longer path lengths do not result in better MAPs. As shown, RWR performance is inferior to PRA; unlike the other approaches, RWR is merely associative and does not involve path learning. PRA is significantly faster than FOIL due to its particle filtering approach in feature induction and inference. It also results in a better MAP performance due to its ability to combine random walk features in a discriminative model. 671 1 10 100 1000 2 3 4 5 Path Finding Time (s) Max Path Length 2F+1B 3F+1B 3F 2F+2B 3F+2B 1F+1B 2F 4F 0.2 0.3 0.4 0.5 2 3 4 5 MAP Max Path Length 2F+1B 3F+1B 3F 2F+2B 3F+2B 1F+1B 2F 4F (a) (b) 0.1 1 10 100 1000 3 4 5 6 Path Discovery Time (s) Max Path Length 2F+1B 3F+1B 3F 4F 2F+2B 3F+2B 5F 4F+2B 3F+3B 4F+1B 0.00 0.05 0.10 0.15 0.20 3 4 5 6 MAP Max Path Length 3F 2F+1B 4F 5F 3F+1B 2F+2B 3F+2B 4F+2B 3F+3B 4F+1B (c) (d) Figure 2: Path finding time (a) and MAP (b) for the KB inference (top) and name extraction (bottom) tasks. A marker iF + jB indicates the maximum path exploration depth i from query node s and j from target node t–so that the combined path length is up to i+j. No paths with constants were used. Table 1 further displays the evaluation results of several variants of CoR-PRA. As shown, modeling features that encode random walk probabilities in both directions (CoR-PRA-no-const), yet no paths with constants, requires longer training times, but results in slightly better performance compared with PRA. Note that for a fixed path length, CoR-PRA has “forward” features of the form P(s →t; π), the probability of reaching target node t from source node s over path π (similarly to PRA), as well as backward features of the form P(s ←t; π−1), the probability of reaching s from t over the backward path π−1. As mentioned earlier these probabilities are not the same; for example, a player usually plays for one team, whereas a team is linked to many players. Performance improves significantly, however, when paths with constants are further added. The table includes our results using constant paths up to length ℓ= 2 and ℓ= 3 (denoted as CoR-PRA-constℓ). Based on tuning experiments on one fold of the data, K = 20 top-rated constant paths were included in the models.4 We found that these paths provide informative class priors; 4MAP performance peaked at roughly K = 20, and gradually decayed as K increased. Table 2: Example paths with constants learnt for the knowledge base inference tasks. (φ denotes empty paths.) Constant path Interpretation r=athletePlaysInLeague P(mlb →t; φ) Bias toward MLB. P(boston braves →t; The leagues played by athletePlaysForTeam−1, Boston Braves university athletePlaysInLeague⟩) team members. r=competesWith P(google →t; φ) Bias toward Google. P(google →t; Companies which compete ⟨competesWith, competesWith⟩)with Google’s competitors. r=teamPlaysInLeague P(ncaa →t; φ) Bias toward NCAA. P(boise state →t; The leagues played by Boise ⟨teamPlaysInLeague⟩) State university teams. example paths and their interpretation are included in Table 2. Figure 2(a) shows the effect of increasing the maximal path length on path finding and selection time. The leftmost (blue) bars show baseline performance of PRA, where only forward random walks are applied. It is clearly demonstrated that the time spent on path finding grows exponentially with ℓ. Due to memory limitations, we were able to execute forward-walk models only up to 4 steps. 
The bars denoted by iF + jB show the results of combining forward walks up to length i with backward walks of up to j = 1 or j = 2 steps. Time complexity using bidirectional random walks is dominated by the longest path segment (either forward or backward)—e.g., the settings 3F, 3F + 1B, 3F + 2B have similar time complexity. Using bidirectional search, we were able to consider relational paths up to length 5. Figure 2(b) presents MAP performance, where it is shown that extending the maximal explored path length did not improve performance in this case. This result indicates that meaningful paths in this domain are mostly short. Accordingly, path length was set to 3 in the respective main experiments. 5.2 Named Entity Extraction We further consider the task of named entity extraction from a corpus of parsed texts, following previous work by Minkov and Cohen (2008). In this case, an entity-relation graph schema is used to represent a corpus of parsed sentences, as illustrated in Figure 3. Graph nodes denoting word mentions (in round edged boxes) are linked over edges typed with dependency relations. The 672 parsed sentence structures are connected via nodes that denote word lemmas, where every word lemma is linked to all of its mentions in the corpus via the special edge type W. We represent part-of-speech tags as another set of graph nodes, where word mentions are connected to the relevant tag over POS edge type. In this graph, task-specific word similarity measures can be derived based on the lexico-syntactic paths that connect word types (Minkov and Cohen, 2014). The task defined in the experiments is to retrieve a ranked list of person names given a small set of seeds. This task is implemented in the graph as a query, where we let the query distribution be uniform over the given seeds (and zero elsewhere). That is, our goal is to find target nodes that are related to the query nodes over the relation r =similar-to, or, coordinate-term. We apply link prediction in this case with the expected result of generating a ranked list of graph nodes, which is populated with many additional person names. The named entity extraction task we consider is somewhat similar to the one adopted by FIGER (Ling and Weld, 2012), in that a finer-grain category is being assigned to proposed named entities. Our approach follows however set expansion settings (Wang and Cohen, 2007), where the goal is to find new instances of the specified type from parsed text. In the experiments, we use the training set portion of the MUC-6 data set (MUC, 1995), represented as a graph of 153k nodes and 748K edges. We generated 30 labeled queries, each comprised of 4 person names selected randomly from the person names mentioned in the data set. The MUC corpus is fully annotated with entity names, so that relevant target nodes (other person names) were readily sampled. Extraction performance was evaluated considering the tagged person names, which were not included in the query, as the correct answer set. The maximum path length of RWR, PRA, and CoR-PRA are set to 6 due to memory limitation. Table 1 shows that PRA is much faster than RWR or FOIL on this data set, giving competitive MAP performance to FOIL. 
RWR is generally ineffective on this task, because similarity in this domain is represented by a relatively small set of long paths, whereas RWR express local node associations in the W BillGates BillGates founded founded nsubj W W SteveJobs SteveJobs founded nsubj W vbd POS POS nnp POS POS CEO appos CEO appos W W CEO nnp POS POS Words/ POSs Tokens Tokens Figure 3: Part of a typed graph representing a corpus of parsed sentences. Table 3: Highly weighted paths with constants learnt for the person name extraction task. Constant path Interpretation P (said ←t; W −1, nsubj, W ) The subjects of ‘said’ or ‘say’ P (says ←t; W −1, nsubj, W ) are likely to be a person name. P (vbg ←t; P OS−1, nsubj, W ) Subjects, proper nouns, and P (nnp ←t; P OS−1, W ) nouns with apposition or P (nn ←t; P OS−1, appos−1, W ) possessive constructions, are P (nn ←t; P OS−1, poss, W ) likely to be person names. graph (Minkov and Cohen, 2008). Modeling inverse path probabilities improves performance substantially, and adding relational features with constants boosts performance further. The constant paths learned encode lexical features, as well as provide useful priors, mainly over different part-of-speech tags. Example constant paths that were highly weighted in the learned models and their interpretation are given in Table 3. Figure 2(c) shows the effect of modeling long relational paths using bidirectional random walks in the language domain. Here, forward path finding was applied to paths up to length 5 due to memory limitation. The figure displays the results of exploring paths up to a total length of 6 edges, performing backward search from the target nodes of up to j = 1, 2, 3 steps. MAP performance (Figure 2(d)) using paths of varying lengths shows significant improvements as the path length increases. Top weighted long features include: P(s →t; W −1, conj and−1, W, W −1, conj and, W) P(s →t; W −1, nn, W, W −1, appos−1, W) P(s →t; W −1, appos, W, W −1, appos−1, W) These paths are similar to the top ranked paths found in previous work (Minkov and Cohen, 2008). In comparison, their results on this dataset using paths of up to 6 steps measured 0.09 in MAP. Our results reach roughly 0.16 in MAP due to modeling of inverse paths; and, when constant 673 paths are incorporated, MAP reaches 0.32. Interestingly, in this domain, FOIL generates fewer yet more complex rules, which are characterised with low recall and high precision, such as: W(B, A) ∧POS(B, nnp) ∧nsubj(D, B) ∧ W(D, said) ∧appos(B, F) →person(A). Note that subsets of these rules, namely, POS(B, nnp), nsubj(D, B) ∧W(D, said) and appos(B, F) have been discovered by PRA as individual features assigned with high weights (Table 3). This indicates an interesting future work, where products of random walk features can be used to express their conjunctions. 6 Conclusion We have introduced CoR-PRA, extending an existing random walk based relational learning paradigm to consider relational paths with constants, bi-directional path features, as well as long paths. Our experiments on knowledge base inference and person name extraction tasks show significant improvements over previously published results, while maintaining efficiency. An interesting future direction is to use products of these random walk features to express their conjunctions. Acknowledgments We thank the reviewers for their helpful feedback. This work was supported in part by BSF grant No. 2010090 and a grant from Google Research. References Eneko Agirre and Aitor Soroa. 2009. 
Personalizing pagerank for word sense disambiguation. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247–1250. ACM. A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. Hruschka Jr., and T. Mitchell. 2010a. Toward an architecture for never-ending language learning. In AAAI. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr., and Tom M. Mitchell. 2010b. Toward an Architecture for Never-Ending Language Learning. In AAAI. Xin Dong, Evgeniy Gabrilovich, Geremy Heitz, Wilko Horn, Ni Lao, Kevin Murphy, Thomas Strohmann, Shaohua Sun, and Wei Zhang. 2014. Knowledge vault: a web-scale approach to probabilistic knowledge fusion. In The 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14, New York, NY, USA August 24 - 27, 2014, pages 601–610. Matt Gardner, Partha Pratim Talukdar, Bryan Kisiel, and Tom Mitchell. 2013. Improving learning and inference in a large knowledge-base using latent syntactic cues. In EMNLP. Matt Gardner, Partha Talukdar, Jayant Krishnamurthy, and Tom Mitchell. 2014. Incorporating Vector Space Similarity in Random Walk Inference over Knowledge Bases. In EMNLP. Taher H. Haveliwala. 2002. Topic-sensitive pagerank. In WWW, pages 517–526. Ondˇrej Kuˇzelka and Filip ˇZelezn´y. 2008. A restarted strategy for efficient subsumption testing. Fundam. Inf., 89(1):95–109, January. Ondˇrej Kuˇzelka and Filip ˇZelezn´y. 2009. Block-wise construction of acyclic relational features with monotone irreducibility and relevancy properties. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML ’09, pages 569–576, New York, NY, USA. ACM. Ni Lao and William W. Cohen. 2010a. Fast query execution for retrieval models based on path-constrained random walks. In Proceedings of the 16th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’10, pages 881–888, New York, NY, USA. ACM. Ni Lao and William W. Cohen. 2010b. Relational retrieval using a combination of path-constrained random walks. In Machine Learning, volume 81, pages 53–67, July. Ni Lao, Tom M. Mitchell, and William W. Cohen. 2011. Random Walk Inference and Learning in A Large Scale Knowledge Base. In EMNLP, pages 529–539. Ni Lao, Amarnag Subramanya, Fernando Pereira, and William W. Cohen. 2012. Reading the web with learned syntactic-semantic inference rules. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 1017–1026, Stroudsburg, PA, USA. Association for Computational Linguistics. X. Ling and D.S. Weld. 2012. Fine-grained entity recognition. In Proceedings of the 26th Conference on Artificial Intelligence (AAAI). Einat Minkov and William W Cohen. 2008. Learning Graph Walk Based Similarity Measures for Parsed Text. EMNLP. 674 Einat Minkov and William W. Cohen. 2014. Adaptive graph walk-based similarity measures for parsed text. Natural Language Engineering, 20(3). Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity Linking meets Word Sense Disambiguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL), 2. 1995. 
MUC6 ’95: Proceedings of the 6th Conference on Message Understanding, Stroudsburg, PA, USA. Association for Computational Linguistics. Michael Pazzani, Cliff Brunk, and Glenn Silverstein. 1991. A Knowledge-Intensive Approach to Learning Relational Concepts. In Proceedings of the Eighth International Workshop on Machine Learning, pages 432–436. Morgan Kaufmann. J. Ross Quinlan and R. Mike Cameron-Jones. 1993. FOIL: A Midterm Report. In ECML, pages 3–20. B L Richards and R J Mooney. 1991. First-Order Theory Revision. In Proceedings of the 8th International Workshop on Machine Learning, pages 447–451. Morgan Kaufmann. Michele Sebag and Celine Rouveirol. 1997. Tractable induction and classification in first order logic via stochastic matching. In Proceedings of the Fifteenth International Joint Conference on Artifical Intelligence - Volume 2, IJCAI’97, pages 888–893, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Ashwin Srinivasan. 2001. The Aleph Manual. In http://web.comlab.ox.ac.uk/oucl/research/areas/machlearn/Aleph/. F. Suchanek, G. Kasneci, and G. Weikum. 2007. YAGO - A Core of Semantic Knowledge. In WWW. Andrew Turpin and Falk Scholer. 2006. User performance versus precision measures for simple search tasks. In PProceedings of the international ACM SIGIR conference on Research and development in information retrieval (SIGIR). Filip ˇZelezn´y and Nada Lavraˇc. 2006. Propositionalization-based relational subgroup discovery with rsd. Mach. Learn., 62(1-2):33–63, February. Richard C Wang and William W Cohen. 2007. Language-independent set expansion of named entities using the web. In Proceedings of the IEEE International Conference on Data Mining (ICDM). William Yang Wang, Kathryn Mazaitis, and William W Cohen. 2013. Programming with personalized pagerank: A locally groundable first-order probabilistic logic. Proceedings of the 22nd ACM International Conference on Information and Knowledge Management (CIKM 2013). 675
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 676–686, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Learning the Semantics of Manipulation Action Yezhou Yang† and Yiannis Aloimonos† and Cornelia Ferm¨uller† and Eren Erdal Aksoy‡ † UMIACS, University of Maryland, College Park, MD, USA {yzyang, yiannis, fer}@umiacs.umd.edu ‡ Karlsruhe Institute of Technology, Karlsruhe, Germany [email protected] Abstract In this paper we present a formal computational framework for modeling manipulation actions. The introduced formalism leads to semantics of manipulation action and has applications to both observing and understanding human manipulation actions as well as executing them with a robotic mechanism (e.g. a humanoid robot). It is based on a Combinatory Categorial Grammar. The goal of the introduced framework is to: (1) represent manipulation actions with both syntax and semantic parts, where the semantic part employs λ-calculus; (2) enable a probabilistic semantic parsing schema to learn the lambda-calculus representation of manipulation action from an annotated action corpus of videos; (3) use (1) and (2) to develop a system that visually observes manipulation actions and understands their meaning while it can reason beyond observations using propositional logic and axiom schemata. The experiments conducted on a public available large manipulation action dataset validate the theoretical framework and our implementation. 1 Introduction Autonomous robots will need to learn the actions that humans perform. They will need to recognize these actions when they see them and they will need to perform these actions themselves. This requires a formal system to represent the action semantics. This representation needs to store the semantic information about the actions, be encoded in a machine readable language, and inherently be in a programmable fashion in order to enable reasoning beyond observation. A formal representation of this kind has a variety of other applications such as intelligent manufacturing, human robot collaboration, action planning and policy design, etc. In this paper, we are concerned with manipulation actions, that is actions performed by agents (humans or robots) on objects, resulting in some physical change of the object. However most of the current AI systems require manually defined semantic rules. In this work, we propose a computational linguistics framework, which is based on probabilistic semantic parsing with Combinatory Categorial Grammar (CCG), to learn manipulation action semantics (lexicon entries) from annotations. We later show that this learned lexicon is able to make our system reason about manipulation action goals beyond just observation. Thus the intelligent system can not only imitate human movements, but also imitate action goals. Understanding actions by observation and executing them are generally considered as dual problems for intelligent agents. The sensori-motor bridge connecting the two tasks is essential, and a great amount of attention in AI, Robotics as well as Neurophysiology has been devoted to investigating it. Experiments conducted on primates have discovered that certain neurons, the so-called mirror neurons, fire during both observation and execution of identical manipulation tasks (Rizzolatti et al., 2001; Gazzola et al., 2007). 
This suggests that the same process is involved in both the observation and execution of actions. From a functionalist point of view, such a process should be able to first build up a semantic structure from observations, and then the decomposition of that same structure should occur when the intelligent agent executes commands. Additionally, studies in linguistics (Steedman, 2002) suggest that the language faculty develops in humans as a direct adaptation of a more primitive apparatus for planning goal-directed action in the world by composing affordances of tools and consequences of actions. It is this more primitive 676 apparatus that is our major interest in this paper. Such an apparatus is composed of a “syntax part” and a “semantic part”. In the syntax part, every linguistic element is categorized as either a function or a basic type, and is associated with a syntactic category which either identifies it as a function or a basic type. In the semantic part, a semantic translation is attached following the syntactic category explicitly. Combinatory Categorial Grammar (CCG) introduced by (Steedman, 2000) is a theory that can be used to represent such structures with a small set of combinators such as functional application and type-raising. What do we gain though from such a formal description of action? This is similar to asking what one gains from a formal description of language as a generative system. Chomskys contribution to language research was exactly this: the formal description of language through the formulation of the Generative and Transformational Grammar (Chomsky, 1957). It revolutionized language research opening up new roads for the computational analysis of language, providing researchers with common, generative language structures and syntactic operations, on which language analysis tools were built. A grammar for action would contribute to providing a common framework of the syntax and semantics of action, so that basic tools for action understanding can be built, tools that researchers can use when developing action interpretation systems, without having to start development from scratch. The same tools can be used by robots to execute actions. In this paper, we propose an approach for learning the semantic meaning of manipulation action through a probabilistic semantic parsing framework based on CCG theory. For example, we want to learn from an annotated training action corpus that the action “Cut” is a function which has two arguments: a subject and a patient. Also, the action consequence of “Cut” is a separation of the patient. Using formal logic representation, our system will learn the semantic representations of “Cut”: Cut :=(AP\NP)/NP : λx.λy.cut(x, y) →divided(y) Here cut(x, y) is a primitive function. We will further introduce the representation in Sec. 3. Since our action representation is in a common calculus form, it enables naturally further logical reasoning beyond visual observation. The advantage of our approach is twofold: 1) Learning semantic representations from annotations helps an intelligent agent to enrich automatically its own knowledge about actions; 2) The formal logic representation of the action could be used to infer the object-wise consequence after a certain manipulation, and can also be used to plan a set of actions to reach a certain action goal. We further validate our approach on a large publicly available manipulation action dataset (MANIAC) from (Aksoy et al., 2014), achieving promising experimental results. 
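For concreteness, the semantic part of such a learned entry behaves like a curried function. The short sketch below is purely illustrative (it is not the implementation used in this work); it encodes the entry for "Cut" directly in Python, consuming the patient first and the subject second, as in the derivations of Section 3.

```python
# Illustrative encoding (not the system's implementation) of the semantic part
# of  Cut := (AP\NP)/NP : \x.\y.cut(x, y) -> divided(y).
# Following the derivation order used later in the paper, the patient NP on
# the right is consumed first, then the subject NP on the left.

cut_entry = lambda patient: lambda subject: (
    f"cut({subject}, {patient}) -> divided({patient})"
)

print(cut_entry("cucumber")("knife"))
# cut(knife, cucumber) -> divided(cucumber)
```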
Moreover, we believe that our work, even though it only considers the domain of manipulation actions, is also a promising example of a more closely intertwined computer vision and computational linguistics system. The diagram in Fig.1 depicts the framework of the system. Figure 1: A CCG based semantic parsing framework for manipulation actions. 2 Related Works Reasoning beyond appearance: The very small number of works in computer vision, which aim to reason beyond appearance models, are also related to this paper. (Xie et al., 2013) proposed that beyond state-of-the-art computer vision techniques, we could possibly infer implicit information (such as functional objects) from video, and they call them “Dark Matter” and “Dark Energy”. (Yang et al., 2013) used stochastic tracking and graphcut based segmentation to infer manipulation consequences beyond appearance. (Joo et al., 2014) used a ranking SVM to predict the persuasive motivation (or the intention) of the photographer who captured an image. More recently, (Pirsiavash et al., 2014) seeks to infer the motivation of the person in the image by mining knowledge stored in 677 a large corpus using natural language processing techniques. Different from these fairly general investigations about reasoning beyond appearance, our paper seeks to learn manipulation actions semantics in logic forms through CCG, and further infer hidden action consequences beyond appearance through reasoning. Action Recognition and Understanding: Human activity recognition and understanding has been studied heavily in Computer Vision recently, and there is a large range of applications for this work in areas like human-computer interactions, biometrics, and video surveillance. Both visual recognition methods, and the non-visual description methods using motion capture systems have been used. A few good surveys of the former can be found in (Moeslund et al., 2006) and (Turaga et al., 2008). Most of the focus has been on recognizing single human actions like walking, jumping, or running etc. (Ben-Arie et al., 2002; Yilmaz and Shah, 2005). Approaches to more complex actions have employed parametric approaches, such as HMMs (Kale et al., 2004) to learn the transition between feature representations in individual frames e.g. (Saisan et al., 2001; Chaudhry et al., 2009). More recently, (Aksoy et al., 2011; Aksoy et al., 2014) proposed a semantic event chain (SEC) representation to model and learn the semantic segment-wise relationship transition from spatial-temporal video segmentation. There also have been many syntactic approaches to human activity recognition which used the concept of context-free grammars, because such grammars provide a sound theoretical basis for modeling structured processes. Tracing back to the middle 90’s, (Brand, 1996) used a grammar to recognize disassembly tasks that contain hand manipulations. (Ryoo and Aggarwal, 2006) used the context-free grammar formalism to recognize composite human activities and multi-person interactions. It is a two level hierarchical approach where the lower-levels are composed of HMMs and Bayesian Networks while the higher-level interactions are modeled by CFGs. To deal with errors from low-level processes such as tracking, stochastic grammars such as stochastic CFGs were also used (Ivanov and Bobick, 2000; Moore and Essa, 2002). More recently, (Kuehne et al., 2014) proposed to model goal-directed human activities using Hidden Markov Models and treat subactions just like words in speech. 
These works proved that grammar based approaches are practical in activity recognition systems, and shed insight onto human manipulation action understanding. However, as mentioned, thinking about manipulation actions solely from the viewpoint of recognition has obvious limitations. In this work we adopt principles from CFG based activity recognition systems, with extensions to a CCG grammar that accommodates not only the hierarchical structure of human activity but also action semantics representations. It enables the system to serve as the core parsing engine for both manipulation action recognition and execution. Manipulation Action Grammar: As mentioned before, (Chomsky, 1993) suggested that a minimalist generative grammar, similar to the one of human language, also exists for action understanding and execution. The works closest related to this paper are (Pastra and Aloimonos, 2012; Summers-Stay et al., 2013; Guha et al., 2013). (Pastra and Aloimonos, 2012) first discussed a Chomskyan grammar for understanding complex actions as a theoretical concept, and (SummersStay et al., 2013) provided an implementation of such a grammar using as perceptual input only objects. More recently, (Yang et al., 2014) proposed a set of context-free grammar rules for manipulation action understanding, and (Yang et al., 2015) applied it on unconstrained instructional videos. However, these approaches only consider the syntactic structure of manipulation actions without coupling semantic rules using λ expressions, which limits the capability of doing reasoning and prediction. Combinatory Categorial Grammar and Semantic Parsing: CCG based semantic parsing originally was used mainly to translate natural language sentences to their desired semantic representations as λ-calculus formulas (Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007). (Mooney, 2008) presented a framework of grounded language acquisition: the interpretation of language entities into semantically informed structures in the context of perception and actuation. The concept has been applied successfully in tasks such as robot navigation (Matuszek et al., 2011), forklift operation (Tellex et al., 2014) and of human-robot interaction (Matuszek et al., 2014). In this work, instead of grounding natural language sentences directly, we ground information obtained from visual perception into seman678 tically informed structures, specifically in the domain of manipulation actions. 3 A CCG Framework for Manipulation Actions Before we dive into the semantic parsing of manipulation actions, a brief introduction to the Combinatory Categorial Grammar framework in Linguistics is necessary. We will only introduce related concepts and formalisms. For a complete background reading, we would like to refer readers to (Steedman, 2000). We will first give a brief introduction to CCG and then introduce a fundamental combinator, i.e., functional application. The introduction is followed by examples to show how the combinator is applied to parse actions. 3.1 Manipulation Action Semantics The semantic expression in our representation of manipulation actions uses a typed λ-calculus language. The formal system has two basic types: entities and functions. Entities in manipulation actions are Objects or Hands, and functions are the Actions. Our lambda-calculus expressions are formed from the following items: Constants: Constants can be either entities or functions. 
For example, Knife is an entity (i.e., it is of type N) and Cucumber is an entity too (i.e., it is of type N). Cut is an action function that maps entities to entities. When the event Knife Cut Cucumber happened, the expression cut(Knife, Cucumber) returns an entity of type AP, aka. Action Phrase. Constants like divided are status functions that map entities to truth values. The expression divided(cucumber) returns a true value after the event (Knife Cut Cucumber) happened. Logical connectors: The λ-calculus expression has logical connectors like conjunction (∧), disjunction (∨), negation(¬) and implication(→). For example, the expression connected(tomato, cucumber)∧ divided(tomato) ∧divided(cucumber) represents the joint status that the sliced tomato merged with the sliced cucumber. It can be regarded as a simplified goal status for “making a cucumber tomato salad”. The expression ¬connected(spoon, bowl) represents the status after the spoon finished stirring the bowl. λx.cut(x, cucumber) →divided(cucumber) represents that if the cucumber is cut by x, then the status of the cucumber is divided. λ expressions: lambda expressions represent functions with unknown arguments. For example, λx.cut(knife, x) is a function from entities to entities, which is of type NP after any entities of type N that is cut by knife. 3.2 Combinatory Categorial Grammar The semantic parsing formalism underlying our framework for manipulation actions is that of combinatory categorial grammar (CCG) (Steedman, 2000). A CCG specifies one or more logical forms for each element or combination of elements for manipulation actions. In our formalism, an element of Action is associated with a syntactic “category” which identifies it as functions, and specifies the type and directionality of their arguments and the type of their result. For example, action “Cut” is a function from patient object phrase (NP) on the right into predicates, and into functions from subject object phrase (NP) on the left into a sub action phrase (AP): Cut := (AP\NP)/NP. (1) As a matter of fact, the pure categorial grammar is a conext-free grammar presented in the accepting, rather than the producing direction. The expression (1) is just an accepting form for Action “Cut” following the context-free grammar. While it is now convenient to write derivations as follows, they are equivalent to conventional tree structure derivations in Figure. 3.2. Knife Cut Cucumber N N NP (AP\NP)/NP NP > AP\NP < AP AP AP NP N Cucumber A Cut NP N Knife Figure 2: Example of conventional tree structure. The semantic type is encoded in these categories, and their translation can be made explicit 679 in an expanded notation. Basically a λ-calculus expression is attached with the syntactic category. A colon operator is used to separate syntactical and semantic expressions, and the right side of the colon is assumed to have lower precedence than the left side of the colon. Which is intuitive as any explanation of manipulation actions should first obey syntactical rules, then semantic rules. Now the basic element, Action “Cut”, can be further represented by: Cut :=(AP\NP)/NP : λx.λy.cut(x, y) →divided(y). (AP\NP)/NP denotes a phrase of type AP, which requires an element of type NP to specify what object was cut, and requires another element of type NP to further complement what effector initiates the cut action. λx.λy.cut(x, y) is the λcalculus representation for this function. 
Since the functions are closely related to the state update, →divided(y) further points out the status expression after the action was performed. A CCG system has a set of combinatory rules which describe how adjacent syntatic categories in a string can be recursively combined. In the setting of manipulation actions, we want to point out that similar combinatory rules are also applicable. Especially the functional application rules are essential in our system. 3.3 Functional application The functional application rules with semantics can be expressed in the following form: A/B : f B : g => A : f(g) (2) B : g A\B : f => A : f(g) (3) Rule. (2) says that a string with type A/B can be combined with a right-adjacent string of type B to form a new string of type A. At the same time, it also specifies how the semantics of the category A can be compositionally built out of the semantics for A/B and B. Rule. (3) is a symmetric form of Rule. (2). In the domain of manipulation actions, following derivation is an example CCG parse. This parse shows how the system can parse an observation (“Knife Cut Cucumber”) into a semantic representation (cut(knife, cucumber) → divided(cucumber)) using the functional application rules. Knife Cut Cucumber N N NP (AP\NP)/NP NP knife λx.λy.cut(x, y) cucumber knife →divided(y) cucumber > AP\NP λx.cut(x, cucumber) →divided(cucumber) < AP cut(knife, cucumber) →divided(cucumber) 4 Learning Model and Semantic Parsing After having defined the formalism and application rule, instead of manually writing down all the possible CCG representations for each entity, we would like to apply a learning technique to derive them from the paired training corpus. Here we adopt the learning model of (Zettlemoyer and Collins, 2005), and use it to assign weights to the semantic representation of actions. Since an action may have multiple possible syntactic and semantic representations assigned to it, we use the probabilistic model to assign weights to these representations. 4.1 Learning Approach First we assume that complete syntactic parses of the observed action are available, and in fact a manipulation action can have several different parses. The parsing uses a probabilistic combinatorial categorial grammar framework similar to the one given by (Zettlemoyer and Collins, 2007). We assume a probabilistic categorial grammar (PCCG) based on a log linear model. M denotes a manipulation task, L denotes the semantic representation of the task, and T denotes its parse tree. The probability of a particular syntactic and semantic parse is given as: P(L, T|M; Θ) = ef(L,T,M)·Θ P (L,T) ef(L,T,M)·Θ (4) where f is a mapping of the triple (L, T, M) to feature vectors ∈Rd, and the Θ ∈Rd represents the weights to be learned. Here we use only lexical features, where each feature counts the number of times a lexical entry is used in T. Parsing a manipulation task under PCCG equates to finding L such that P(L|M; Θ) is maximized: argmaxLP(L|M; Θ) = argmaxL X T P(L, T|M; Θ). (5) 680 We use dynamic programming techniques to calculate the most probable parse for the manipulation task. In this paper, the implementation from (Baral et al., 2011) is adopted, where an inverse-λ technique is used to generalize new semantic representations. The generalization of lexicon rules are essential for our system to deal with unknown actions presented during the testing phase. 
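To make the mechanics of rules (2) and (3) concrete, the following sketch re-implements forward and backward application over (category, semantics) pairs and replays the "Knife Cut Cucumber" derivation. It is an illustrative toy, not the NL2KR-based system used in this work; the category strings and helper names are our own.

```python
# Illustrative implementation of the functional application rules (2) and (3):
#   A/B : f   B : g   =>  A : f(g)     (forward application)
#   B : g   A\B : f   =>  A : f(g)     (backward application)
# Categories are plain strings; semantics are Python callables or constants.

def _strip_parens(cat):
    # Drop one pair of enclosing parentheses, e.g. "(AP\NP)" -> "AP\NP".
    return cat[1:-1] if cat.startswith("(") and cat.endswith(")") else cat

def forward_apply(left, right):
    """(A/B : f) combined with (B : g) yields (A : f(g))."""
    (cat_l, f), (cat_r, g) = left, right
    assert cat_l.endswith("/" + cat_r), "left category must seek the right argument"
    return _strip_parens(cat_l[: -len("/" + cat_r)]), f(g)

def backward_apply(left, right):
    """(B : g) combined with (A\\B : f) yields (A : f(g))."""
    (cat_l, g), (cat_r, f) = left, right
    assert cat_r.endswith("\\" + cat_l), "right category must seek the left argument"
    return _strip_parens(cat_r[: -len("\\" + cat_l)]), f(g)

# Lexicon for the example "Knife Cut Cucumber":
knife = ("NP", "knife")
cucumber = ("NP", "cucumber")
cut = ("(AP\\NP)/NP",
       lambda patient: lambda subject:
           f"cut({subject}, {patient}) -> divided({patient})")

vp = forward_apply(cut, cucumber)   # AP\NP : \x.cut(x, cucumber) -> divided(cucumber)
ap = backward_apply(knife, vp)      # AP : cut(knife, cucumber) -> divided(cucumber)
print(ap[1])
```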
5 Experiments 5.1 Manipulation Action (MANIAC) Dataset (Aksoy et al., 2014) provides a manipulation action dataset with 8 different manipulation actions (cutting, chopping, stirring, putting, taking, hiding, uncovering, and pushing), each of which consists of 15 different versions performed by 5 different human actors1. There are in total 30 different objects manipulated in all demonstrations. All manipulations were recorded with the Microsoft Kinect sensor and serve as training data here. The MANIAC data set contains another 20 long and complex chained manipulation sequences (e.g. “making a sandwich”) which consist of a total of 103 different versions of these 8 manipulation tasks performed in different orders with novel objects under different circumstances. These serve as testing data for our experiments. (Aksoy et al., 2014; Aksoy and W¨org¨otter, 2015) developed a semantic event chain based model free decomposition approach. It is an unsupervised probabilistic method that measures the frequency of the changes in the spatial relations embedded in event chains, in order to extract the subject and patient visual segments. It also decomposes the long chained complex testing actions into their primitive action components according to the spatio-temporal relations of the manipulator. Since the visual recognition is not the core of this work, we omit the details here and refer the interested reader to (Aksoy et al., 2014; Aksoy and W¨org¨otter, 2015). All these features make the MANIAC dataset a great testing bed for both the theoretical framework and the implemented system presented in this work. 5.2 Training Corpus We first created a training corpus by annotating the 120 training clips from the MANIAC dataset, 1Dataset available for download at https: //fortknox.physik3.gwdg.de/cns/index. php?page=maniac-dataset. in the format of observed triplets (subject action patient) and a corresponding semantic representation of the action as well as its consequence. The semantic representations in λ-calculus format are given by human annotators after watching each action clip. A set of sample training pairs are given in Table.1 (one from each action category in the training set). Since every training clip contains one single full execution of each manipulation action considered, the training corpus thus has a total of 120 paired training samples. Snapshot triplet semantic representation cleaver chopping carrot chopping(cleaver, carrot) →divided(carrot) spatula cutting pepper cutting(spatula, pepper) →divided(pepper) spoon stirring bucket stirring(spoon, bucket) cup take down bucket take down(cup, bucket) →¬connected(cup, bucket) ∧moved(cup) cup put on top bowl put on top(cup, bowl) →on top(cup, bowl) ∧moved(cup) bucket hiding ball hiding(bucket, ball) →contained(bucket, ball) ∧moved(bucket) hand pushing box pushing(hand, box) →moved(box) box uncover apple uncover(box, apple) →appear(apple) ∧moved(box) Table 1: Example annotations from training corpus, one per manipulation action category. We also assume the system knows that every “object” involved in the corpus is an entity of its own type, for example: Knife := N : knife Bowl := N : bowl ...... Additionally, we assume the syntactic form of each “action” has a main type (AP\NP)/NP (see Sec. 3.2). These two sets of rules form the initial seed lexicon for learning. 5.3 Learned Lexicon We applied the learning technique mentioned in Sec. 4, and we used the NL2KR implementation from (Baral et al., 2011). 
The system learns and generalizes a set of lexicon entries (syntactic and semantic) for each action categories from the training corpus accompanied with a set of weights. 681 We list the one with the largest weight for each action here respectively: Chopping :=(AP\NP)/NP : λx.λy.chopping(x, y) →divided(y) Cutting :=(AP\NP)/NP : λx.λy.cutting(x, y) →divided(y) Stirring :=(AP\NP)/NP : λx.λy.stirring(x, y) Take down :=(AP\NP)/NP : λx.λy.take down(x, y) →¬connected(x, y) ∧moved(x) Put on top :=(AP\NP)/NP : λx.λy.put on top(x, y) →on top(x, y) ∧moved(x) Hiding :=(AP\NP)/NP : λx.λy.hiding(x, y) →contained(x, y) ∧moved(x) Pushing :=(AP\NP)/NP : λx.λy.pushing(x, y) →moved(y) Uncover :=(AP\NP)/NP : λx.λy.uncover(x, y) →appear(y) ∧moved(x). The set of seed lexicon and the learned lexicon entries are further used to probabilistically parse the detected triplet sequences from the 20 long manipulation activities in the testing set. 5.4 Deducing Semantics Using the decomposition technique from (Aksoy et al., 2014; Aksoy and W¨org¨otter, 2015), the reported system is able to detect a sequence of action triplets in the form of (Subject Action Patient) from each of the testing sequence in MANIAC dataset. Briefly speaking, the event chain representation (Aksoy et al., 2011) of the observed long manipulation activity is first scanned to estimate the main manipulator, i.e. the hand, and manipulated objects, e.g. knife, in the scene without employing any visual feature-based object recognition method. Solely based on the interactions between the hand and manipulated objects in the scene, the event chain is partitioned into chunks. These chunks are further fragmented into subunits to detect parallel action streams. Each parsed Semantic Event Chain (SEC) chunk is then compared with the model SECs in the library to decide whether the current SEC sample belongs to one of the known manipulation models or represents a novel manipulation. SEC models, stored in the library, are learned in an on-line unsupervised fashion using the semantics of manipulations derived from a given set of training data in order to create a large vocabulary of single atomic manipulations. For the different testing sequence, the number of triplets detected ranges from two to seven. In total, we are able to collect 90 testing detections and they serve as the testing corpus. However, since many of the objects used in the testing data are not present in the training set, an object model-free approach is adopted and thus “subject” and “patient” fields are filled with segment IDs instead of a specific object name. Fig. 3 and 4 show several examples of the detected triplets accompanied with a set of key frames from the testing sequences. Nevertheless, the method we used here can 1) generalize the unknown segments into the category of object entities and 2) generalize the unknown actions (those that do not exist in the training corpus) into the category of action function. This is done by automatically generalizing the following two types of lexicon entries using the inverse-λ technique from (Baral et al., 2011): Object [ID] :=N : object [ID] Unknown :=(AP\NP)/NP : λx.λy.unknown(x, y) Among the 90 detected triplets, using the learned lexicon we are able to parse all of them into semantic representations. Here we pick the representation with the highest probability after parsing as the individual action semantic representation. The “parsed semantics” rows of Fig. 3 and 4 show several example action semantics on testing sequences. 
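The back-off for unseen symbols can be sketched as follows. The encoding is an illustrative assumption (string templates stand in for the λ-calculus entries, and the helper name parse_triplet is ours, not part of the actual implementation): actions with a learned entry are grounded by that entry, and any other action falls back to the generic unknown(x, y) entry.

```python
# Illustrative back-off for unseen symbols: segments keep their IDs as entities
# of type N, and actions without a learned entry receive the generic
#   (AP\NP)/NP : \x.\y.unknown(x, y).

def parse_triplet(subject, action, patient, learned_lexicon):
    """Ground a detected (subject action patient) triplet into a semantic form."""
    template = learned_lexicon.get(action, "unknown({s}, {p})")
    return template.format(s=subject, p=patient)

# Two of the largest-weight lexicon entries listed above, abbreviated as templates:
lexicon = {
    "chopping": "chopping({s}, {p}) -> divided({p})",
    "put_on_top": "put_on_top({s}, {p}) -> on_top({s}, {p}) & moved({s})",
}

print(parse_triplet("object_014", "chopping", "object_011", lexicon))
# chopping(object_014, object_011) -> divided(object_011)
print(parse_triplet("object_020", "squeezing", "object_021", lexicon))
# "squeezing" has no learned entry -> unknown(object_020, object_021)
```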
Taking the fourth sub-action from Fig. 4 as an example, the visually detected triplets based on segmentation and spatial decomposition is (Object 014, Chopping, Object 011). After semantic parsing, the system predicts that divided(Object 011). The complete training corpus and parsed results of the testing set will be made publicly available for future research. 5.5 Reasoning Beyond Observations As mentioned before, because of the use of λcalculus for representing action semantics, the obtained data can naturally be used to do logical reasoning beyond observations. This by itself is a very interesting research topic and it is beyond this paper’s scope. However by applying a couple of common sense Axioms on the testing data, we can provide some flavor of this idea. Case study one: See the “final action consequence and reasoning” row of Fig. 3 for case one. Using propositional logic and axiom schema, we can represent the common sense statement (“if an object x is contained in object y, and object z is on top of object y, then object z is on top of object x”) as follows: 682 Figure 3: System output on complex chained manipulation testing sequence one. The segmentation output and detected triplets are from (Aksoy and W¨org¨otter, 2015) . Figure 4: System output on the 18th complex chained manipulation testing sequence. The segmentation output and detected triplets are from (Aksoy and W¨org¨otter, 2015) . Axiom (1): ∃x, y, z, contained(y, x) ∧ on top(z, y) →on top(z, x). Then it is trivial to deduce an additional final action consequence in this scenario that (on top(object 007, object 009)). This matches the fact: the yellow box which is put on top of the red bucket is also on top of the black ball. Case study two: See the “final action consequence and reasoning” row of Fig. 4 for a more complicated case. Using propositional logic and axiom schema, we can represent three common sense statements: 1) “if an object y is contained in object x, and object z is contained in object y, then object z is contained in object x”; 2) “if an object x is contained in object y, and object y is divided, then object x is divided”; 3) “if an object x is contained in object y, and object y is on top of object z, then object x is on top of object z” as follows: Axiom (2): ∃x, y, z, contained(y, x) ∧ contained(z, y) →contained(z, x). Axiom (3): ∃x, y, contained(y, x) ∧ divided(y) →divided(x). Axiom (4): ∃x, y, z, contained(y, x) ∧ on top(y, z) →on top(x, z). With these common sense Axioms, the system is able to deduce several additional final action consequences in this scenario: divided(object 005) ∧divided(object 010) ∧on top(object 005, object 012) ∧on top(object 010, object 012). From Fig. 4, we can see that these additional consequences indeed match the facts: 1) the bread and cheese which are covered by ham are also divided, even though from observation the system only detected the ham being cut; 2) the divided bread and cheese are also on top of the plate, even though from observation the system only detected the ham being put on top of the plate. 683 We applied the four Axioms on the 20 testing action sequences and deduced the “hidden” consequences from observation. To evaluate our system performance quantitatively, we first annotated all the final action consequences (both obvious and “hidden” ones) from the 20 testing sequences as ground-truth facts. In total there are 122 consequences annotated. 
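The deduction itself amounts to naive forward chaining with the four axioms over ground facts. The sketch below uses an assumed tuple encoding of facts and is not the reasoner used in the experiments; it reaches a fixed point and recovers the extra consequence of case study one (the bucket's segment ID is a placeholder, since only object 007 and object 009 are named in the text).

```python
# Illustrative forward-chaining reasoner: ground facts are tuples such as
# ("contained", a, b), read as "a contains/covers b", ("on_top", a, b)
# and ("divided", a). Axioms (1)-(4) are applied until no new fact is added.

def apply_axioms(facts):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        contained = [(f[1], f[2]) for f in facts if f[0] == "contained"]
        on_top = [(f[1], f[2]) for f in facts if f[0] == "on_top"]
        divided = {f[1] for f in facts if f[0] == "divided"}
        new = set()
        for (y, x) in contained:                      # y contains/covers x
            # Axiom 1: contained(y, x) & on_top(z, y) -> on_top(z, x)
            new |= {("on_top", z, x) for (z, b) in on_top if b == y}
            # Axiom 2: contained(y, x) & contained(z, y) -> contained(z, x)
            new |= {("contained", z, x) for (z, b) in contained if b == y}
            # Axiom 3: contained(y, x) & divided(y) -> divided(x)
            if y in divided:
                new.add(("divided", x))
            # Axiom 4: contained(y, x) & on_top(y, z) -> on_top(x, z)
            new |= {("on_top", x, z) for (a, z) in on_top if a == y}
        if new - facts:
            facts |= new
            changed = True
    return facts

# Case study one (the bucket's ID is an assumption; only object_007, the box,
# and object_009, the ball, are named in the text):
observed = {
    ("contained", "red_bucket", "object_009"),   # hiding(bucket, ball)
    ("on_top", "object_007", "red_bucket"),      # put_on_top(box, bucket)
}
closure = apply_axioms(observed)
assert ("on_top", "object_007", "object_009") in closure   # deduced by Axiom 1
```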
Using perception only (Aksoy and W¨org¨otter, 2015), due to the decomposition errors (such as the red font ones in Fig. 4) the system can detect 91 consequences correctly, yielding a 74% correct rate. After applying the four Axioms and reasoning, our system is able to detect 105 consequences correctly, yielding a 86% correct rate. Overall, this is a 15.4% of improvement. Here we want to mention a caveat: there are definitely other common sense Axioms that we are not able to address in the current implementation. However, from the case studies presented, we can see that using the presented formal framework, our system is able to reason about manipulation action goals instead of just observing what is happening visually. This capability is essential for intelligent agents to imitate action goals from observation. 6 Conclusion and Future Work In this paper we presented a formal computational framework for modeling manipulation actions based on a Combinatory Categorial Grammar. An empirical study on a large manipulation action dataset validates that 1) with the introduced formalism, a learning system can be devised to deduce the semantic meaning of manipulation actions in λ-schema; 2) with the learned schema and several common sense Axioms, our system is able to reason beyond just observation and deduce “hidden” action consequences, yielding a decent performance improvement. Due to the limitation of current testing scenarios, we conducted experiments only considering a relatively small set of seed lexicon rules and logical expressions. Nevertheless, we want to mention that the presented CCG framework can also be extended to learn the formal logic representation of more complex manipulation action semantics. For example, the temporal order of manipulation actions can be modeled by considering a seed rule such as AP\AP : λf.λg.before(f(·), g(·)), where before(·, ·) is a temporal predicate. For actions in this paper we consider seed main type (AP\NP)/NP. For more general manipulation scenarios, based on whether the action is transitive or intransitive, the main types of action can be extended to include AP\NP. Moreover, the logical expressions can also be extended to include universal quantification ∀and existential quantification ∃. Thus, manipulation action such as “knife cut every tomato” can be parsed into a representation as ∀x.tomato(x) ∧ cut(knife, x) →divided(x) (the parse is given in the following chart). Here, the concept “every” has a main type of NP\NP and semantic meaning of ∀x.f(x). The same framework can also extended to have other combinatory rules such as composition and type-raising (Steedman, 2002). These are parts of the future work along the line of the presented work. Knife Cut every Tomato N N NP (AP\NP)/NP NP\NP NP knife λx.λy.cut(x, y) ∀x.f (x) tomato knife →divided(y) ∀x.f (x) tomato > NP ∀x.tomato(x) > AP\NP ∀y.λx.tomato(y) ∧cut(x, y) →divided(y) < AP ∀y.tomato(y) ∧cut(knife, y) →divided(y) The presented computational linguistic framework enables an intelligent agent to predict and reason action goals from observation, and thus has many potential applications such as human intention prediction, robot action policy planning, human robot collaboration etc. We believe that our formalism of manipulation actions bridges computational linguistics, vision and robotics, and opens further research in Artificial Intelligence and Robotics. 
As the robotics industry is moving towards robots that function safely, effectively and autonomously to perform tasks in real-world unstructured environments, they will need to be able to understand the meaning of actions and acquire human-like common-sense reasoning capabilities. 7 Acknowledgements This research was funded in part by the support of the European Union under the Cognitive Systems program (project POETICON++), the National Science Foundation under INSPIRE grant SMA 1248056, and by DARPA through U.S. Army grant W911NF-14-1-0384 under the Project: Shared Perception, Cognition and Reasoning for Autonomy. 684 References E E. Aksoy and F. W¨org¨otter. 2015. Semantic decomposition and recognition of long and complex manipulation action sequences. International Journal of Computer Vision, page Under Review. E.E. Aksoy, A. Abramov, J. D¨orr, K. Ning, B. Dellen, and F. W¨org¨otter. 2011. Learning the semantics of object–action relations by observation. The International Journal of Robotics Research, 30(10):1229– 1249. E E. Aksoy, M. Tamosiunaite, and F. W¨org¨otter. 2014. Model-free incremental learning of the semantics of manipulation actions. Robotics and Autonomous Systems, pages 1–42. Chitta Baral, Juraj Dzifcak, Marcos Alvarez Gonzalez, and Jiayu Zhou. 2011. Using inverse λ and generalization to translate english to formal languages. In Proceedings of the Ninth International Conference on Computational Semantics, pages 35–44. Association for Computational Linguistics. Jezekiel Ben-Arie, Zhiqian Wang, Purvin Pandit, and Shyamsundar Rajaram. 2002. Human activity recognition using multidimensional indexing. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 24(8):1091–1104. Matthew Brand. 1996. Understanding manipulation in video. In Proceedings of the Second International Conference on Automatic Face and Gesture Recognition, pages 94–99, Killington,VT. IEEE. R. Chaudhry, A. Ravichandran, G. Hager, and R. Vidal. 2009. Histograms of oriented optical flow and binetcauchy kernels on nonlinear dynamical systems for the recognition of human actions. In Proceedings of the 2009 IEEE Intenational Conference on Computer Vision and Pattern Recognition, pages 1932– 1939, Miami,FL. IEEE. N. Chomsky. 1957. Syntactic Structures. Mouton de Gruyter. Noam Chomsky. 1993. Lectures on government and binding: The Pisa lectures. Walter de Gruyter. V Gazzola, G Rizzolatti, B Wicker, and C Keysers. 2007. The anthropomorphic brain: the mirror neuron system responds to human and robotic actions. Neuroimage, 35(4):1674–1684. Anupam Guha, Yezhou Yang, Cornelia Ferm¨uller, and Yiannis Aloimonos. 2013. Minimalist plans for interpreting manipulation actions. Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 5908–5914. Yuri A. Ivanov and Aaron F. Bobick. 2000. Recognition of visual activities and interactions by stochastic parsing. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):852–872. Jungseock Joo, Weixin Li, Francis F Steen, and SongChun Zhu. 2014. Visual persuasion: Inferring communicative intents of images. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 216–223. IEEE. A. Kale, A. Sundaresan, AN Rajagopalan, N.P. Cuntoor, A.K. Roy-Chowdhury, V. Kruger, and R. Chellappa. 2004. Identification of humans using gait. IEEE Transactions on Image Processing, 13(9):1163–1173. Hilde Kuehne, Ali Arslan, and Thomas Serre. 2014. 
The language of actions: Recovering the syntax and semantics of goal-directed human activities. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 780–787. IEEE. Cynthia Matuszek, Nicholas FitzGerald, Luke Zettlemoyer, Liefeng Bo, and Dieter Fox. 2011. A joint model of language and perception for grounded attribute learning. In International Conference on Machine learning (ICML). Cynthia Matuszek, Liefeng Bo, Luke Zettlemoyer, and Dieter Fox. 2014. Learning from unscripted deictic gesture and language for human-robot interactions. In Twenty-Eighth AAAI Conference on Artificial Intelligence. T.B. Moeslund, A. Hilton, and V. Kr¨uger. 2006. A survey of advances in vision-based human motion capture and analysis. Computer vision and image understanding, 104(2):90–126. Raymond J Mooney. 2008. Learning to connect language and perception. In AAAI, pages 1598–1601. Darnell Moore and Irfan Essa. 2002. Recognizing multitasked activities from video using stochastic context-free grammar. In Proceedings of the National Conference on Artificial Intelligence, pages 770–776, Menlo Park, CA. AAAI. K. Pastra and Y. Aloimonos. 2012. The minimalist grammar of action. Philosophical Transactions of the Royal Society B: Biological Sciences, 367(1585):103–117. Hamed Pirsiavash, Carl Vondrick, and Antonio Torralba. 2014. Inferring the why in images. arXiv preprint arXiv:1406.5472. Giacomo Rizzolatti, Leonardo Fogassi, and Vittorio Gallese. 2001. Neurophysiological mechanisms underlying the understanding and imitation of action. Nature Reviews Neuroscience, 2(9):661–670. Michael S Ryoo and Jake K Aggarwal. 2006. Recognition of composite human activities through contextfree grammar based representation. In Proceedings of the 2006 IEEE Conference on Computer Vision and Pattern Recognition, volume 2, pages 1709– 1718, New York City, NY. IEEE. 685 P. Saisan, G. Doretto, Y.N. Wu, and S. Soatto. 2001. Dynamic texture recognition. In Proceedings of the 2001 IEEE Intenational Conference on Computer Vision and Pattern Recognition, volume 2, pages 58–63, Kauai, HI. IEEE. Mark Steedman. 2000. The syntactic process, volume 35. MIT Press. Mark Steedman. 2002. Plans, affordances, and combinatory grammar. Linguistics and Philosophy, 25(56):723–753. D. Summers-Stay, C.L. Teo, Y. Yang, C. Ferm¨uller, and Y. Aloimonos. 2013. Using a minimal action grammar for activity understanding in the real world. In Proceedings of the 2013 IEEE/RSJ International Conference on Intelligent Robots and Systems, pages 4104–4111, Vilamoura, Portugal. IEEE. Stefanie Tellex, Pratiksha Thaker, Joshua Joseph, and Nicholas Roy. 2014. Learning perceptually grounded word meanings from unaligned parallel data. Machine Learning, 94(2):151–167. P. Turaga, R. Chellappa, V.S. Subrahmanian, and O. Udrea. 2008. Machine recognition of human activities: A survey. IEEE Transactions on Circuits and Systems for Video Technology, 18(11):1473– 1488. Dan Xie, Sinisa Todorovic, and Song-Chun Zhu. 2013. Inferring “dark matter” and “dark energy” from videos. In Computer Vision (ICCV), 2013 IEEE International Conference on, pages 2224–2231. IEEE. Yezhou Yang, Cornelia Ferm¨uller, and Yiannis Aloimonos. 2013. Detection of manipulation action consequences (MAC). In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, pages 2563–2570, Portland, OR. IEEE. Y. Yang, A. Guha, C. Fermuller, and Y. Aloimonos. 2014. A cognitive system for understanding human manipulation actions. Advances in Cognitive Sysytems, 3:67–86. 
Yezhou Yang, Yi Li, Cornelia Fermüller, and Yiannis Aloimonos. 2015. Robot learning manipulation action plans by “watching” unconstrained videos from the world wide web. In The Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15). A. Yilmaz and M. Shah. 2005. Actions sketch: A novel action representation. In Proceedings of the 2005 IEEE International Conference on Computer Vision and Pattern Recognition, volume 1, pages 984–989, San Diego, CA. IEEE. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In UAI. Luke S. Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In EMNLP-CoNLL, pages 678–687.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 687–696, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Knowledge Graph Embedding via Dynamic Mapping Matrix Guoliang Ji, Shizhu He, Liheng Xu, Kang Liu and Jun Zhao National Laboratory of Pattern Recognition (NLPR) Institute of Automation Chinese Academy of Sciences, Beijing, 100190, China {guoliang.ji,shizhu.he,lhxu,kliu,jzhao}@nlpr.ia.ac.cn Abstract Knowledge graphs are useful resources for numerous AI applications, but they are far from completeness. Previous work such as TransE, TransH and TransR/CTransR regard a relation as translation from head entity to tail entity and the CTransR achieves state-of-the-art performance. In this paper, we propose a more fine-grained model named TransD, which is an improvement of TransR/CTransR. In TransD, we use two vectors to represent a named symbol object (entity and relation). The first one represents the meaning of a(n) entity (relation), the other one is used to construct mapping matrix dynamically. Compared with TransR/CTransR, TransD not only considers the diversity of relations, but also entities. TransD has less parameters and has no matrix-vector multiplication operations, which makes it can be applied on large scale graphs. In Experiments, we evaluate our model on two typical tasks including triplets classification and link prediction. Evaluation results show that our approach outperforms stateof-the-art methods. 1 Introduction Knowledge Graphs such as WordNet (Miller 1995), Freebase (Bollacker et al. 2008) and Yago (Suchanek et al. 2007) have been playing a pivotal role in many AI applications, such as relation extraction(RE), question answering(Q&A), etc. They usually contain huge amounts of structured data as the form of triplets (head entity, relation, tail entity)(denoted as (h, r, t)), where relation models the relationship between the two entities. As most knowledge graphs have been built either collaboratively or (partly) automatically, they often suffer from incompleteness. Knowledge graph completion is to predict relations between entities based on existing triplets in a knowledge graph. In the past decade, much work based on symbol and logic has been done for knowledge graph completion, but they are neither tractable nor enough convergence for large scale knowledge graphs. Recently, a powerful approach for this task is to encode every element (entities and relations) of a knowledge graph into a low-dimensional embedding vector space. These methods do reasoning over knowledge graphs through algebraic operations (see section ”Related Work”). Among these methods, TransE (Bordes et al. 2013) is simple and effective, and also achieves state-of-the-art prediction performance. It learns low-dimensional embeddings for every entity and relation in knowledge graphs. These vector embeddings are denoted by the same letter in boldface. The basic idea is that every relation is regarded as translation in the embedding space. For a golden triplet (h, r, t), the embedding h is close to the embedding t by adding the embedding r, that is h + r ≈t. TransE is suitable for 1-to-1 relations, but has flaws when dealing with 1-toN, N-to-1 and N-to-N relations. TransH (Wang et al. 2014) is proposed to solve these issues. 
TransH regards a relation as a translating operation on a relation-specific hyperplane, which is characterized by a norm vector wr and a translation vector dr. The embeddings h and t are first projected to the hyperplane of relation r to obtain vectors h⊥= h −w⊤ r hwr and t⊥= t −w⊤ r twr, and then h⊥+ dr ≈t⊥. Both in TransE and TransH, the embeddings of entities and relations are in the same space. However, entities and relations are different types objects, it is insufficient to model them in the same space. TransR/CTransR (Lin et al. 2015) set a mapping matrix Mr and a vector r for every relation r. In TransR, h and t are projected to the aspects that relation r focuses on through the ma687 Entity Space Relation Space r 1 h 1t 2 h 2t 3 h 3t i i m n rh p i p m n rt p i p × × = + = + M r h I M r t I ⊤ ⊤ ( ) 1,2,3 i = r r r r 1⊥ h 2⊥ h 3⊥ h 1⊥ t 2⊥ t 3⊥ t Figure 1: Simple illustration of TransD. Each shape represents an entity pair appearing in a triplet of relation r. Mrh and Mrt are mapping matrices of h and t, respectively. hip, tip(i = 1, 2, 3), and rp are projection vectors. hi⊥and ti⊥(i = 1, 2, 3) are projected vectors of entities. The projected vectors satisfy hi⊥+ r ≈ti⊥(i = 1, 2, 3). trix Mr and then Mrh + r ≈Mrt. CTransR is an extension of TransR by clustering diverse headtail entity pairs into groups and learning distinct relation vectors for each group. TransR/CTransR has significant improvements compared with previous state-of-the-art models. However, it also has several flaws: (1) For a typical relation r, all entities share the same mapping matrix Mr. However, the entities linked by a relation always contains various types and attributes. For example, in triplet (friedrich burklein, nationality, germany), friedrich burklein and germany are typical different types of entities. These entities should be projected in different ways; (2) The projection operation is an interactive process between an entity and a relation, it is unreasonable that the mapping matrices are determined only by relations; and (3) Matrix-vector multiplication makes it has large amount of calculation, and when relation number is large, it also has much more parameters than TransE and TransH. As the complexity, TransR/CTransR is difficult to apply on largescale knowledge graphs. In this paper, we propose a novel method named TransD to model knowledge graphs. Figure 1 shows the basic idea of TransD. In TransD, we define two vectors for each entity and relation. The first vector represents the meaning of an entity or a relation, the other one (called projection vector) represents the way that how to project a entity embedding into a relation vector space and it will be used to construct mapping matrices. Therefore, every entity-relation pair has an unique mapping matrix. In addition, TransD has no matrixby-vector operations which can be replaced by vectors operations. We evaluate TransD with the task of triplets classification and link prediction. The experimental results show that our method has significant improvements compared with previous models. Our contributions in this paper are: (1)We propose a novel model TransD, which constructs a dynamic mapping matrix for each entity-relation pair by considering the diversity of entities and relations simultaneously. It provides a flexible style to project entity representations to relation vector space; (2) Compared with TransR/CTransR, TransD has fewer parameters and has no matrixvector multiplication. 
It is easy to be applied on large-scale knowledge graphs like TransE and TransH; and (3) In experiments, our approach outperforms previous models including TransE, TransH and TransR/CTransR in link prediction and triplets classification tasks. 2 Related Work Before proceeding, we define our mathematical notations. We denote a triplet by (h, r, t) and their column vectors by bold lower case letters h, r, t; matrices by bold upper case letters, such as M; tensors by bold upper case letters with a hat, such as c M. Score function is represented by fr(h, t). For a golden triplet (h, r, t) that corresponds to a true fact in real world, it always get a relatively higher score, and lower for an negative triplet. Other notations will be described in the appropriate sections. 2.1 TransE, TransH and TransR/CTransR As mentioned in Introduction section, TransE (Bordes et al. 2013) regards the relation r as translation from h to t for a golden triplet (h, r, t). Hence, (h+r) is close to (t) and the score function is fr(h, t) = −∥h + r −t∥2 2. (1) TransE is only suitable for 1-to-1 relations, there remain flaws for 1-to-N, N-to-1 and N-to-N relations. To solve these problems, TransH (Wang et al. 2014) proposes an improved model named translation on a hyperplane. On hyperplanes of different relations, a given entity has different representations. Similar to TransE, TransH has the score function as follows: fr(h, t) = −∥h⊥+ r −t⊥∥2 2. (2) 688 Model #Parameters # Operations (Time complexity) Unstructured (Bordes et al. 2012; 2014) O(Nem) O(Nt) SE (Bordes et al. 2011) O(Nem + 2Nrn2)(m = n) O(2m2Nt) SME(linear) (Bordes et al. 2012; 2014) O(Nem + Nrn + 4mk + 4k)(m = n) O(4mkNt) SME (bilinear) (Bordes et al. 2012; 2014) O(Nem + Nrn + 4mks + 4k)(m = n) O(4mksNt) LFM (Jenatton et al. 2012; Sutskever et al. 2009) O(Nem + Nrn2)(m = n) O((m2 + m)Nt) SLM (Socher et al. 2013) O(Nem + Nr(2k + 2nk))(m = n) O((2mk + k)Nt) NTN (Socher et al. 2013) O(Nem + Nr(n2s + 2ns + 2s))(m = n) O(((m2 + m)s + 2mk + k)Nt) TransE (Bordes et al. 2013) O(Nem + Nrn)(m = n) O(Nt) TransH (Wang et al. 2014) O(Nem + 2Nrn)(m = n) O(2mNt) TransR (Lin et al. 2015) O(Nem + Nr(m + 1)n) O(2mnNt) CTransR (Lin et al. 2015) O(Nem + Nr(m + d)n) O(2mnNt) TransD (this paper) O(2Nem + 2Nrn) O(2nNt) Table 1: Complexity (the number of parameters and the number of multiplication operations in an epoch) of several embedding models. Ne and Nr represent the number of entities and relations, respectively. Nt represents the number of triplets in a knowledge graph. m is the dimension of entity embedding space and n is the dimension of relation embedding space. d denotes the average number of clusters of a relation. k is the number of hidden nodes of a neural network and s is the number of slice of a tensor. In order to ensure that h⊥and t⊥are on the hyperplane of r, TransH restricts ∥wr∥= 1. Both TransE and TransH assume that entities and relations are in the same vector space. But relations and entities are different types of objects, they should not be in the same vector space. TransR/CTransR (Lin et al. 2015) is proposed based on the idea. TransR set a mapping matrix Mr for each relation r to map entity embedding into relation vector space. Its score function is: fr(h, t) = −∥Mrh + r −Mrt∥2 2. (3) where Mr ∈Rm×n, h, t ∈Rn and r ∈Rm. CTransR is an extension of TransR. As head-tail entity pairs present various patterns in different relations, CTransR clusters diverse head-tail entity pairs into groups and sets a relation vector for each group. 
2.2 Other Models Unstructured. Unstructured model (Bordes et al. 2012; 2014) ignores relations, only models entities as embeddings. The score function is fr(h, t) = −∥h −t∥2 2. (4) It’s a simple case of TransE. Obviously, Unstructured model can not distinguish different relations. Structured Embedding (SE). SE model (Bordes et al. 2011) sets two separate matrices Mrh and Mrt to project head and tail entities for each relation. Its score function is defined as follows: fr(h, t) = −∥Mrhh −Mrtt∥1 (5) Semantic Matching Energy (SME). SME model (Bordes et al. 2012; 2014) encodes each named symbolic object (entities and relations) as a vector. Its score function is a neural network that captures correlations between entities and relations via matrix operations. Parameters of the neural network are shared by all relations. SME defines two semantic matching energy functions for optimization, a linear form gη = Mη1eη + Mη2r + bη (6) and a bilinear form gη = (Mη1eη) ⊗(Mη2r) + bη (7) where η = {left, right}, eleft = h, eright = t and ⊗is the Hadamard product. The score function is fr(h, t) = gleft⊤gright (8) In (Bordes et al.2014), matrices of the bilinear form are replaced by tensors. Latent Factor Model (LFM). LFM model (Jenatton et al. 2012; Sutskever et al. 2009) encodes each entity into a vector and sets a matrix for every relation. It defines a score function fr(h, t) = h⊤Mrt, which incorporates the interaction of the two entity vectors in a simple and effecitve way. Single Layer Model (SLM). SLM model is designed as a baseline of Neural Tensor Network (Socher et al. 2013). The model constructs a nonlinear neural network to represent the score function defined as follows. fr(h, t) = u⊤ r f(Mr1h + Mr2t + br) (9) where Mr1, Mr2 and br are parameters indexed by relation r, f() is tanh operation. 689 Neural Tensor Network (NTN). NTN model (Socher et al. 2013) extends SLM model by considering the second-order correlations into nonlinear neural networks. The score function is fr(h, t) = u⊤ r f(h⊤c Wrt + Mr  h t  + br) (10) where c Wr represents a 3-way tensor, Mr denotes the weight matrix, br is the bias and f() is tanh operation. NTN is the most expressive model so far, but it has so many parameters that it is difficult to scale up to large knowledge graphs. Table 1 lists the complexity of all the above models. The complexity (especially for time) of TransD is much less than TransR/CTransR and is similar to TransE and TransH. Therefore, TransD is effective and train faster than TransR/CTransR. Beyond these embedding models, there is other related work of modeling multi-relational data, such as matrix factorization, recommendations, etc. In experiments, we refer to the results of RESCAL presented in (Lin et al. 2015) and compare with it. 3 Our Method We first define notations. Triplets are represented as (hi, ri, ti)(i = 1, 2, . . . , nt), where hi denotes a head entity, ti denotes a tail entity and ri denotes a relation. Their embeddings are denoted by hi, ri, ti(i = 1, 2, . . . , nt). We use ∆to represent golden triplets set, and use ∆ ′ to denote negative triplets set. Entities set and relations set are denoted by E and R, respectively. We use Im×n to denote the identity matrix of size m × n. 3.1 Multiple Types of Entities and Relations Considering the diversity of relations, CTransR segments triplets of a specific relation r into several groups and learns a vector representation for each group. However, entities also have various types. 
Figure 2 shows several kinds of head and tail entities of relation location.location.partially containedby in FB15k. In both TransH and TransR/CTransR, all types of entities share the same mapping vectors/matrices. However, different types of entities have different attributes and functions, it is insufficient to let them share the same transform parameters of a relation. And for a given relation, similar entities should have similar mapping matrices and otherwise for dissimilar entities. Furthermore, the mapping process is a transaction between entities and relations that both have various types. Therefore, we propose a more fine-grained model TransD, which considers different types of both entities and relations, to encode knowledge graphs into embedding vectors via dynamic mapping matrices produced by projection vectors. Figure 2: Multiple types of entities of relation location.location.partially containedby. 3.2 TransD Model In TransD, each named symbol object (entities and relations) is represented by two vectors. The first one captures the meaning of entity (relation), the other one is used to construct mapping matrices. For example, given a triplet (h, r, t), its vectors are h, hp, r, rp, t, tp, where subscript p marks the projection vectors, h, hp, t, tp ∈Rn and r, rp ∈Rm. For each triplet (h, r, t), we set two mapping matrices Mrh, Mrt ∈Rm×n to project entities from entity space to relation space. They are defined as follows: Mrh = rph⊤ p + Im×n (11) Mrt = rpt⊤ p + Im×n (12) Therefore, the mapping matrices are determined by both entities and relations, and this kind of operation makes the two projection vectors interact sufficiently because each element of them can meet every entry comes from another vector. As we initialize each mapping matrix with an identity matrix, we add the Im×n to Mrh and Mrh. With the mapping matrices, we define the projected vectors as follows: h⊥= Mrhh, t⊥= Mrtt (13) 690 Then the score function is fr(h, t) = −∥h⊥+ r −t⊥∥2 2 (14) In experiments, we enforce constrains as ∥h∥2 ≤ 1, ∥t∥2 ≤1, ∥r∥2 ≤1, ∥h⊥∥2 ≤1 and ∥t⊥∥2 ≤ 1. Training Objective We assume that there are nt triplets in training set and denote the ith triplet by (hi, ri, ti)(i = 1, 2, . . . , nt). Each triplet has a label yi to indicate the triplet is positive (yi = 1) or negative (yi = 0). Then the golden and negative triplets are denoted by ∆= {(hj, rj, tj) | yj = 1} and ∆ ′ = {(hj, rj, tj) | yj = 0}, respectively. Before training, one important trouble is that knowledge graphs only encode positive training triplets, they do not contain negative examples. Therefore, we obtain ∆from knowledge graphs and generate ∆ ′ as follows: ∆ ′ = {(hl, rk, tk) | hl ̸= hk ∧yk = 1} ∪{(hk, rk, tl) | tl ̸= tk ∧yk = 1}. We also use two strategies “unif” and “bern” described in (Wang et al. 2014) to replace the head or tail entity. Let us use ξ and ξ ′ to denote a golden triplet and a corresponding negative triplet, respectively. Then we define the following margin-based ranking loss as the objective for training: L = X ξ∈∆ X ξ′∈∆′ [γ + fr(ξ ′) −fr(ξ)]+ (15) where [x]+ ≜max (0, x), and γ is the margin separating golden triplets and negative triplets. The process of minimizing the above objective is carried out with stochastic gradient descent (SGD). In order to speed up the convergence and avoid overfitting, we initiate the entity and relation embeddings with the results of TransE and initiate all the transfer matrices with identity matrices. 
3.3 Connections with TransE, TransH and TransR/CTransR TransE is a special case of TransD when the dimension of vectors satisfies m = n and all projection vectors are set zero. TransH is related to TransD when we set m = n. Under the setting, projected vectors of entities can be rewritten as follows: h⊥= Mrhh = h + h⊤ p hrp (16) t⊥= Mrtt = t + t⊤ p trp (17) Hence, when m = n, the difference between TransD and TransH is that projection vectors are determinded only by relations in TransH, but TransD’s projection vectors are determinded by both entities and relations. As to TransR/CTransR, TransD is an improvement of it. TransR/CTransR directly defines a mapping matrix for each relation, TransD consturcts two mapping matrices dynamically for each triplet by setting a projection vector for each entity and relation. In addition, TransD has no matrix-vector multiplication operation which can be replaced by vector operations. Without loss of generality, we assume m ≥n, the projected vectors can be computed as follows: h⊥= Mrhh = h⊤ p hrp +  h⊤, 0⊤⊤(18) t⊥= Mrtt = t⊤ p trp +  t⊤, 0⊤⊤ (19) Therefore, TransD has less calculation than TransR/CTransR, which makes it train faster and can be applied on large-scale knowledge graphs. 4 Experiments and Results Analysis We evaluate our apporach on two tasks: triplets classification and link prediction. Then we show the experiments results and some analysis of them. 4.1 Data Sets Triplets classification and link prediction are implemented on two popular knowledge graphs: WordNet (Miller 1995) and Freebase (Bollacker et al. 2008). WordNet is a large lexical knowledge graph. Entities in WordNet are synonyms which express distinct concepts. Relations in WordNet are conceptual-semantic and lexical relations. In this paper, we use two subsets of WordNet: WN11 (Socher et al. 2013) and WN18 (Bordes et al. 2014). Freebase is a large collaborative knowledge base consists of a large number of the world facts, such as triplets (anthony asquith, location, london) and (nobuko otowa, profession, actor). We also use two subsets of Freebase: FB15k (Bordes et al. 2014) and FB13 (Socher et al. 2013). Table 2 lists statistics of the 4 datasets. Dataset #Rel #Ent #Train #Valid #Test WN11 11 38,696 112,581 2,609 10,544 WN18 18 40,943 141,442 5,000 5,000 FB13 13 75,043 316,232 5908 23,733 FB15k 1,345 14,951 483,142 50,000 59,071 Table 2: Datesets used in the experiments. 691 4.2 Triplets Classification Triplets classification aims to judge whether a given triplet (h, r, t) is correct or not, which is a binary classification task. Previous work (Socher et al. 2013; Wang et al. 2014; Lin et al. 2015) had explored this task. In this paper ,we use three datasets WN11, FB13 and FB15k to evaluate our approach. The test sets of WN11 and FB13 provided by (Socher et al. 2013) contain golden and negative triplets. As to FB15k, its test set only contains correct triplets, which requires us to construct negative triplets. In this parper, we construct negative triplets following the same setting used for FB13 (Socher et al. 2013). For triplets classification, we set a threshold δr for each relation r. δr is obtained by maximizing the classification accuracies on the valid set. For a given triplet (h, r, t), if its score is larger than δr, it will be classified as positive, otherwise negative. We compare our model with several previous embedding models presented in Related Work section. 
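The projection of Eqs. (11)–(13), its matrix-free form of Eqs. (18)–(19), the score of Eq. (14), and a simplified paired variant of the loss in Eq. (15) are summarized in the following NumPy sketch. It is illustrative code rather than the training implementation; the check at the end confirms that the matrix form and the vector-only form produce the same projected vector when m ≥ n.

```python
# Sketch of the TransD projection and score, assuming m >= n; dimensions and
# random values are illustrative, and norm constraints are omitted.
import numpy as np

def project(e, e_p, r_p):
    """M = r_p e_p^T + I_{m x n};  return M e  (Eqs. 11-13)."""
    m, n = r_p.shape[0], e.shape[0]
    M = np.outer(r_p, e_p) + np.eye(m, n)
    return M @ e

def project_fast(e, e_p, r_p):
    """Vector-only form of Eqs. 18-19: (e_p . e) r_p + [e^T, 0^T]^T."""
    m, n = r_p.shape[0], e.shape[0]
    padded = np.concatenate([e, np.zeros(m - n)])
    return (e_p @ e) * r_p + padded

def score_transd(h, h_p, t, t_p, r, r_p):
    # f_r(h, t) = -||h_perp + r - t_perp||_2^2   (Eq. 14)
    h_perp = project_fast(h, h_p, r_p)
    t_perp = project_fast(t, t_p, r_p)
    return -np.sum((h_perp + r - t_perp) ** 2)

def margin_loss(pos_scores, neg_scores, gamma=1.0):
    # Simplified paired form of Eq. 15: sum of [gamma + f_r(neg) - f_r(pos)]_+
    return sum(max(0.0, gamma + sn - sp) for sp, sn in zip(pos_scores, neg_scores))

# Check that the matrix form and the vector-only form agree:
rng = np.random.default_rng(1)
n, m = 4, 6                                     # entity dim n, relation dim m
h, h_p = rng.normal(size=n), rng.normal(size=n)
t, t_p = rng.normal(size=n), rng.normal(size=n)
r, r_p = rng.normal(size=m), rng.normal(size=m)
assert np.allclose(project(h, h_p, r_p), project_fast(h, h_p, r_p))
print(score_transd(h, h_p, t, t_p, r, r_p))
```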
As we construct negative triplets for FB15k by ourselves, we use the codes of TransE, TransH and TransR/CTransR provied by (Lin et al. 2015) to evaluate the datasets instead of reporting the results of (Wang et al.2014; Lin et al. 2015) directly. In this experiment, we optimize the objective with ADADELTA SGD (Zeiler 2012). We select the margin γ among {1, 2, 5, 10}, the dimension of entity vectors m and the dimension of relation vectors n among {20, 50, 80, 100}, and the mini-batch size B among {100, 200, 1000, 4800}. The best configuration obtained by valid set are:γ = 1, m, n = 100, B = 1000 and taking L2 as dissimilarity on WN11; γ = 1, m, n = 100, B = 200 and taking L2 as dissimilarity on FB13; γ = 2, m, n = 100, B = 4800 and taking L1 as dissimilarity on FB15k. For all the three datasets, We traverse to training for 1000 rounds. As described in Related Work section, TransD trains much faster than TransR (On our PC, TransR needs 70 seconds and TransD merely spends 24 seconds a round on FB15k). Table 3 shows the evaluation results of triplets classification. On WN11, we found that there are 570 entities appearing in valid and test sets but not appearing in train set, we call them ”NULL Entity”. In valid and test sets, there are 1680 (6.4%) triplets containing ”NULL Entity”. In NTN(+E), these entity embeddings can be obtained by word embedding. In TransD, howData sets WN11 FB13 FB15K SE 53.0 75.2 SME(bilinear) 70.0 63.7 SLM 69.9 85.3 LFM 73.8 84.3 NTN 70.4 87.1 68.2 NTN(+E) 86.2 90.0 TransE(unif) 75.9 70.9 77.3 TransE(bern) 75.9 81.5 79.8 TransH(unif) 77.7 76.5 74.2 TransH(bern) 78.8 83.3 79.9 TransR(unif) 85.5 74.7 81.1 TransR(bern) 85.9 82.5 82.1 CTransR(bern) 85.7 84.3 TransD(unif) 85.6 85.9 86.4 TransD(bern) 86.4 89.1 88.0 Table 3: Experimental results of Triplets Classification(%). “+E” means that the results are combined with word embedding. ever, they are only initialized randomly. Therefore, it is not fair for TransD, but we also achieve the accuracy 86.4% which is higher than that of NTN(+E) (86.2%). From Table 3, we can conclude that: (1) On WN11, TransD outperforms any other previous models including TransE, TransH and TransR/CTransR, especially NTN(+E); (2) On FB13, the classification accuracy of TransD achieves 89.1%, which is significantly higher than that of TransE, TransH and TransR/CTransR and is near to the performance of NTN(+E) (90.0%); and (3) Under most circumstances, the ”bern” sampling method works better than ”unif”. Figure 3 shows the prediction accuracy of different relations. On the three datasets, different relations have different prediction accuracy: some are higher and the others are lower. Here we focus on the relations which have lower accuracy. On WN11, the relation similar to obtains accuracy 51%, which is near to random prediction accuracy. In the view of intuition, similar to can be inferred from other information. However, the number of entity pairs linked by relation similar to is only 1672, which accounts for 1.5% in all train data, and prediction of the relation needs much information about entities. Therefore, the insufficient of train data is the main cause. On FB13, the accuracies of relations cuase of death and gender are lower than that of other relations because they are difficult to infer from other imformation, especially cuase of death. Relation gender may be inferred from a person’s name (Socher et al. 
2013), but we learn a vector for each name, not for the words included in the names, which makes the 692 50 60 70 80 90 100 has_instance similar_to member_meronym domain_region subordinate_instance_of domain_topic member_holonym synset_domain_topic has_part part_of type_of Accuracy(%) WN11 unif bern 50 60 70 80 90 100 cause_of_death gender profession religion nationality institution ethnicity Accuracy(%) FB13 unif bern 50 60 70 80 90 100 45 50 55 60 65 70 75 80 85 90 95 100 Accuracy(%) of "bern" Accuracy(%) of "unif" FB15K Figure 3: Classification accuracies of different relations on the three datasets. For FB15k, each triangle represent a relation, in which the red triangles represent the relations whose accuracies of “bern” or “unif” are lower than 50% and the blacks are higher than 50%. The red line represents the function y = x. We can see that the most relations are in the lower part of the red line. names information useless for gender. On FB15k, accuracies of some relations are lower than 50%, for which some are lack of train data and some are difficult to infer. Hence, the ability of reasoning new facts based on knowledge graphs is under a certain limitation, and a complementary approach is to extract facts from plain texts. 4.3 Link Prediction Link prediction is to predict the missing h or t for a golden triplet (h, r, t). In this task, we remove the head or tail entity and then replace it with all the entities in dictionary in turn for each triplet in test set. We first compute scores of those corrupted triplets and then rank them by descending order; the rank of the correct entity is finally stored. The task emphasizes the rank of the correct entity instead of only finding the best one entity. Similar to (Bordes et al. 2013), we report two measures as our evaluation metrics: the average rank of all correct entites (Mean Rank) and the proportion of correct entities ranked in top 10 (Hits@10). A lower Mean Rank and a higher Hits@10 should be achieved by a good embedding model. We call the evaluation setting ”Raw’. Noting the fact that a corrupted triplet may also exist in knowledge graphs, the corrupted triplet should be regard as a correct triplet. Hence, we should remove the corrupted triplets included in train, valid and test sets before ranking. We call this evaluation setting ”Filter”. In this paper, we will report evaluation results of the two settings . In this task, we use two datasets: WN18 and FB15k. As all the data sets are the same, we refer to their experimental results in this paper. On WN18, we also use ADADELTA SGD (Zeiler 2012) for optimization. We select the margin γ among {0.1, 0.5, 1, 2}, the dimension of entity vectors m and the dimension of relation vectors n among {20, 50, 80, 100}, and the mini-batch size B among {100, 200, 1000, 1400}. The best configuration obtained by valid set are:γ = 1, m, n = 50, B = 200 and taking L2 as dissimilarity. For both the two datasets, We traverse to training for 1000 rounds. Experimental results on both WN18 and FB15k are shown in Table 4. From Table 4, we can conclude that: (1) TransD outperforms other baseline embedding models (TransE, TransH and TransR/CTransR), especially on sparse dataset, i.e., FB15k; (2) Compared with CTransR, TransD is a more fine-grained model which considers the multiple types of entities and relations simultaneously, and it achieves a better performance. 
It indicates that TransD handles complicated internal correlations of entities and relations in knowledge graphs better than CTransR; (3) The “bern” sampling trick can reduce false negative labels than “unif”. For the comparison of Hits@10 of different kinds of relations, Table 5 shows the detailed results by mapping properties of relations1 on FB15k. From Table 5, we can see that TransD outperforms TransE, TransH and TransR/CTransR significantly in both “unif” and “bern” settings. TransD achieves better performance than CTransR in all types of relations (1-to-1, 1-to-N, N-to-1 and N-to-N). For N-to-N relations in predicting both head and tail, our approach improves the Hits@10 by almost 7.4% than CTransR. In particular, for 1Mapping properties of relations follows the same rules in (Bordes et al. 2013) 693 Data sets WN18 FB15K Metric Mean Rank Hits@10 Mean Rank Hits@10 Raw Filt Raw Filt Raw Filt Raw Filt Unstructured (Bordes et al. 2012) 315 304 35.3 38.2 1,074 979 4.5 6.3 RESCAL (Nickle, Tresp, and Kriegel 2011) 1,180 1,163 37.2 52.8 828 683 28.4 44.1 SE (Bordes et al. 2011) 1,011 985 68.5 80.5 273 162 28.8 39.8 SME (linear) (Bordes et al.2012) 545 533 65.1 74.1 274 154 30.7 40.8 SME (Bilinear) (Bordes et al. 2012) 526 509 54.7 61.3 284 158 31.3 41.3 LFM (Jenatton et al. 2012) 469 456 71.4 81.6 283 164 26.0 33.1 TransE (Bordes et al. 2013) 263 251 75.4 89.2 243 125 34.9 47.1 TransH (unif) (Wang et al. 2014) 318 303 75.4 86.7 211 84 42.5 58.5 TransH (bern) (Wang et al. 2014) 401 388 73.0 82.3 212 87 45.7 64.4 TransR (unif) (Lin et al. 2015) 232 219 78.3 91.7 226 78 43.8 65.5 TransR (bern) (Lin et al. 2015) 238 225 79.8 92.0 198 77 48.2 68.7 CTransR (unif) (Lin et al. 2015) 243 230 78.9 92.3 233 82 44.0 66.3 CTransR (bern) (Lin et al. 2015) 231 218 79.4 92.3 199 75 48.4 70.2 TransD (unif) 242 229 79.2 92.5 211 67 49.4 74.2 TransD (bern) 224 212 79.6 92.2 194 91 53.4 77.3 Table 4: Experimental results on link prediction. Tasks Prediction Head (Hits@10) Prediction Tail (Hits@10) Relation Category 1-to-1 1-to-N N-to-1 N-to-N 1-to-1 1-to-N N-to-1 N-to-N Unstructured (Bordes et al. 2012) 34.5 2.5 6.1 6.6 34.3 4.2 1.9 6.6 SE (Bordes et al. 2011) 35.6 62.6 17.2 37.5 34.9 14.6 68.3 41.3 SME (linear) (Bordes et al.2012) 35.1 53.7 19.0 40.3 32.7 14.9 61.6 43.3 SME (Bilinear) (Bordes et al. 2012) 30.9 69.6 19.9 38.6 28.2 13.1 76.0 41.8 TransE (Bordes et al. 2013) 43.7 65.7 18.2 47.2 43.7 19.7 66.7 50.0 TransH (unif) (Wang et al. 2014) 66.7 81.7 30.2 57.4 63.7 30.1 83.2 60.8 TransH (bern) (Wang et al. 2014) 66.8 87.6 28.7 64.5 65.5 39.8 83.3 67.2 TransR (unif) (Lin et al. 2015) 76.9 77.9 38.1 66.9 76.2 38.4 76.2 69.1 TransR (bern) (Lin et al. 2015) 78.8 89.2 34.1 69.2 79.2 37.4 90.4 72.1 CTransR (unif) (Lin et al. 2015) 78.6 77.8 36.4 68.0 77.4 37.8 78.0 70.3 CTransR (bern) (Lin et al. 2015) 81.5 89.0 34.7 71.2 80.8 38.6 90.1 73.8 TransD (unif) 80.7 85.8 47.1 75.6 80.0 54.5 80.7 77.9 TransD (bern) 86.1 95.5 39.8 78.5 85.4 50.6 94.4 81.2 Table 5: Experimental results on FB15K by mapping properities of relations (%). N-to-1 relations (predicting head) and 1-to-N relations (predicting tail), TransD improves the accuracy by 9.0% and 14.7% compared with previous state-of-the-art results, respectively. Therefore, the diversity of entities and relations in knowledge grahps is an important factor and the dynamic mapping matrix is suitable for modeling knowledge graphs. 
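To make the link-prediction protocol behind Tables 4 and 5 concrete, the following sketch (our own simplification; score_fn again stands in for f_r, and the toy entity IDs are arbitrary) ranks every candidate tail for a test triplet and reports Mean Rank and Hits@10 under both the "Raw" and the "Filter" settings.

    import numpy as np

    def tail_rank(h, r, t, n_entities, score_fn, known, filtered):
        scores = np.array([score_fn(h, r, c) for c in range(n_entities)])
        order = np.argsort(-scores)                       # best-scoring candidates first
        if filtered:
            # remove corrupted triplets that are themselves true, but keep the gold tail
            order = [c for c in order if c == t or (h, r, c) not in known]
        return list(order).index(t) + 1                   # 1-based rank of the gold tail

    def link_prediction_eval(test, n_entities, score_fn, known, filtered):
        ranks = [tail_rank(h, r, t, n_entities, score_fn, known, filtered)
                 for h, r, t in test]
        return float(np.mean(ranks)), float(np.mean([rk <= 10 for rk in ranks]))  # Mean Rank, Hits@10

    # toy usage with a random scorer in place of the trained model
    rng = np.random.default_rng(1)
    known = {(0, 0, 1), (0, 0, 2), (3, 1, 4)}             # union of train/valid/test triplets
    print(link_prediction_eval([(0, 0, 1), (3, 1, 4)], 50, lambda h, r, t: rng.normal(), known, filtered=True))

Predicting the head entity is handled symmetrically.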
5 Properties of Projection Vectors As mentioned in Section ”Introduction”, TransD is based on the motivation that each mapping matrix is determined by entity-relation pair dynamically. These mapping matrices are constructed with projection vectors of entities and relations. Here, we analysis the properties of projection vectors. We seek the similar objects (entities and relations) for a given object (entities and relations) by projection vectors. As WN18 has the most entities (40,943 entities which contains various types of words. FB13 also has many entities, but the most are person’s names) and FB15k has the most relations (1,345 relations), we show the similarity of projection vectors on them. Table 6 and 7 show that the same category objects have similar projection vectors. The similarity of projection vectors of different types of entities and relations indicates the rationality of our method. 6 Conclusions and Future Work We introduced a model TransD that embed knowledge graphs into continues vector space for their completion. TransD has less complexity and more flexibility than TransR/CTransR. When learning embeddings of named symbol objects (entities or relations), TransD considers the diversity of them both. Extensive experiments show that TransD outperforms TrasnE, TransH and TransR/CTransR on two tasks including triplets classification and link prediction. As shown in Triplets Classification section, not all new facts can be deduced from the exist694 Datesets WN18 Entities and Definitions upset VB 4 cause to overturn from an upright or normal position srbija NN 1 a historical region in central and northern Yugoslavia Similar Entities and Definitions sway VB 4 cause to move back and forth montenegro NN 1 a former country bordering on the Adriatic Sea shift VB 2 change place or direction constantina NN 1 a Romanian resort city on the Black Sea flap VB 3 move with a thrashing motion lappland NN 1 a region in northmost Europe inhabited by Lapps fluctuate VB 1 cause to fluctuate or move in a wavelike pattern plattensee NN 1 a large shallow lake in western Hungary leaner NN 1 (horseshoes) the throw of a horseshoe so as to lean against (but not encircle) the stake brasov NN 1 a city in central Romania in the foothills of the Transylvanian Alps Table 6: Entity projection vectors similarity (in descending order) computed on WN18. The similarity scores are computed with cosine function. Datesets FB15k Relation /location/statistical region/rent50 2./measurement unit/dated money value/currency Similar relations /location/statistical region/rent50 3./measurement unit/dated money value/currency /location/statistical region/rent50 1./measurement unit/dated money value/currency /location/statistical region/rent50 4./measurement unit/dated money value/currency /location/statistical region/rent50 0./measurement unit/dated money value/currency /location/statistical region/gdp nominal./measurement unit/dated money value/currency Relation /sports/sports team/roster./soccer/football roster position/player Similar relations /soccer/football team/current roster./sports/sports team roster/player /soccer/football team/current roster./soccer/football roster position/player /sports/sports team/roster./sports/sports team roster/player /basketball/basketball team/historical roster./sports/sports team roster/player /sports/sports team/roster./basketball/basketball historical roster position/player Table 7: Relation projection vectors similarity computed on FB15k. 
The similarity scores are computed with cosine function. ing triplets in knowledge graphs, such as relations gender, place of place, parents and children. These relations are difficult to infer from all other information, but they are also useful resource for practical applications and incomplete, i.e. the place of birth attribute is missing for 71% of all people included in FreeBase (Nickel, et al. 2015). One possible way to obtain these new triplets is to extract facts from plain texts. We will seek methods to complete knowledge graphs with new triplets whose entities and relations come from plain texts. Acknowledgments This work was supported by the National Basic Research Program of China (No. 2014CB340503) and the National Natural Science Foundation of China (No. 61272332 and No. 61202329). References George A. Miller. 1995. WordNet: A lexical database for english. Communications of the ACM, 38(11):39-41. Bollacker K., Evans C., Paritosh P., Sturge T., and Taylor J. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data. pages 1247-1250. Fabian M. Suchanek, Kasneci G., Weikum G. 2007. YAGO: A core of semantic Knowledge Unifying WordNet and Wikipedia. In Proceedings of the 16th international conference on World Wide Web. Bordes A., Usunier N., Garcia-Dur´an A. 2013. Translating Embeddings for Modeling Multi-relational Data. In Proceedings of NIPS. pags:2787-2795. Wang Z., Zhang J., Feng J. and Chen Z. 2014. Knowledge graph embedding by translating on hyperplanes. In Proceedings of AAAI. pags:11121119. Lin Y., Zhang J., Liu Z., Sun M., Liu Y., Zhu X. 2015. Learning Entity and Relation Embeddings for Knowledge Graph Completion. In Proceedings of AAAI. Bordes A., Glorot X., Weston J., and Bengio Y. 2012. Joint learning of words and meaning representations for open-text semantic parsing. In Proceedings of AISTATS. pags:127-135. Bordes A., Glorot X., Weston J., and Bengio Y. 2014. A semantic matching energy function for learing with multirelational data. Machine Learning. 94(2):pags:233-259. Bordes A., Weston J., Collobert R., and Bengio Y. 2011. Learning structured embeddings of knowledge bases. In Proceedings of AAAI. pags:301-306. 695 Jenatton R., Nicolas L. Roux, Bordes A., and Obozinaki G. 2012. A latent factor model for highly multi-relational data. In Proceedings of NIPS. pags:3167-3175. Sutskever I., Salakhutdinov R. and Joshua B. Tenenbaum. 2009. Modeling Relational Data using Bayesian Clustered Tensor Factorization. In Proceedings of NIPS. pags:1821-1828. Socher R., Chen D., Christopher D. Manning and Andrew Y. Ng. 2013. Reasoning With Neural Tensor Networks for Knowledge Base Completion. In Proceedings of NIPS. pags:926-934. Weston J., Bordes A., Yakhnenko O. Manning and Ununier N. 2013. Connecting language and knowledge bases with embedding models for relation extraction. In Proceedings of EMNLP. pags:1366-1371. Matthew D. Zeiler. 2012. ADADELTA: AN ADAPTIVE LEARNING RATE METHOD. In Proceedings of CVPR. Socher R., Huval B., Christopher D Manning. Manning and Andrew Y. Ng. 2012. Semantic Compositionality through Recursive Matrix-vector Spaces. In Proceedings of EMNLP. Nickel M., Tresp V., Kriegel H-P. 2011. A threeway model for collective learning on multi-relational data. In Proceedings of ICML. pages:809-816. Nickel M., Tresp V., Kriegel H-P. 2012. Factorizing YAGO: Scalable Machine Learning for Linked Data. In Proceedings of WWW. Nickel M., Tresp V. 2013a. 
An Analysis of Tensor Models for Learning from Structured Data. Machine Learning and Knowledge Discovery in Databases, Springer. Nickel M., Tresp V. 2013b. Tensor Factorization for Multi-Relational Learning. Machine Learning and Knowledge Discovery in Databases, Springer. Nickel M., Murphy K., Tresp V., Gabrilovich E. 2015. A Review of Relational Machine Learning for Knowledge Graphs. In Proceedings of IEEE.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 697–707, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics How Far are We from Fully Automatic High Quality Grammatical Error Correction? Christopher Bryant Department of Computer Science National University of Singapore 13 Computing Drive Singapore 117417 [email protected] Hwee Tou Ng Department of Computer Science National University of Singapore 13 Computing Drive Singapore 117417 [email protected] Abstract In this paper, we first explore the role of inter-annotator agreement statistics in grammatical error correction and conclude that they are less informative in fields where there may be more than one correct answer. We next created a dataset of 50 student essays, each corrected by 10 different annotators for all error types, and investigated how both human and GEC system scores vary when different combinations of these annotations are used as the gold standard. Upon learning that even humans are unable to score higher than 75% F0.5, we propose a new metric based on the ratio between human and system performance. We also use this method to investigate the extent to which annotators agree on certain error categories, and find that similar results can be obtained from a smaller subset of just 10 essays. 1 Introduction Interest in grammatical error correction (GEC) systems has grown considerably in the past few years, thanks mainly to the success of the recent Helping Our Own (HOO) (Dale and Kilgarriff, 2011; Dale et al., 2012) and Conference on Natural Language Learning (CoNLL) (Ng et al., 2013; Ng et al., 2014) shared tasks. Despite this increasing attention, however, one of the most significant challenges facing GEC today is the lack of a robust evaluation practice. In fact Chodorow et al. (2012) even go as far to say that it is sometimes “hard to draw meaningful comparisons between different approaches, even when they are evaluated on the same corpus.” One of the reasons for this is that, traditionally, system performance has only ever been evaluated against the gold standard annotations of a single native speaker (rarely, two native speakers). As such, system output is not actually scored on the basis of grammatical acceptability alone, but rather is also constrained by the idiosyncrasies of the particular annotators. The obvious solution to this problem would be to compare systems against the gold standard annotations of multiple annotators, in an effort to dilute the effect of individual annotator bias, however creating manual annotations is often considered too time consuming and expensive. In spite of this, while other studies have instead elected to use crowdsourcing to produce multiply-corrected annotations, often concerning only a limited number of error types (Madnani et al., 2011; Pavlick et al., 2014; Tetreault et al., 2014), one of the main contributions of this paper is the provision of a dataset of 10 human expert annotations, annotated in the tradition of CoNLL-2014, that is moreover annotated for all error types.1 With this new dataset, we have, for the first time, been able to compare system output against the gold standard annotations of a larger group of human annotators, in a realistic grammar checking scenario, and consequently been able to quantify the extent to which additional annotators affect system performance. 
Additionally, we also noticed that some annotators tend to agree on certain error categories more than others and so attempt to explain this. In light of the results, we also explore how human annotators themselves compare against the combined annotations of the remaining annotators and thus calculate an upper bound F0.5 score for the given dataset and number of annotators; e.g., if one human versus nine other humans is only able to score a maximum of 70% F0.5, then it is unreasonable to expect a machine to do better. For this reason, we propose a more informative method of 1http://www.comp.nus.edu.sg/˜nlp/sw/ 10gec_annotations.zip 697 evaluating a system based on the ratio of that system’s F0.5 score against the equivalent human F0.5 score. Section 2 contains an overview of some of the latest research in both GEC and SMT that makes use of IAA statistics. Section 3 shows an example sentence from our dataset and qualitatively analyses how individual annotator bias affects their choice of corrections. Section 4 describes the data collection process and presents some preliminary results. Section 5 discusses the main quantitative results of the paper, formalizing the formulas used and introducing the more informative method of ratio scoring for GEC, while Section 6 summarizes the results from our additional experiments on category agreement and essay subsets. Section 7 concludes the paper. 2 Inter-Annotator Agreement (IAA) Whenever we discuss multiple annotators, researchers invariably raise the issue of interannotator agreement (IAA), or rather the extent to which annotators agree with each other. This is because data which shows a higher level of agreement is often believed to be in some way more reliable than data which has a lower agreement score. Within GEC, agreement has often been reported in terms of Cohen’s-κ (Cohen, 1960), although other agreement statistics could also be used.2 In the rest of this section, however, we wish to challenge the use of IAA statistics in GEC and question their value in this field. Specifically, while IAA statistics may be informative in areas where items can be classified into single, welldefined categories, such as in part-of-speech tagging, we argue that they are less well-suited to GEC and SMT, where there is often more than one correct answer. For example, two annotators may correct or translate a given sentence in two completely different yet valid ways, but IAA statistics are only able to interpret the alternative answers as disagreements. 2.1 Inter-Annotator Agreement in GEC One important study that made use of κ as a measure of agreement between raters is by Tetrault and Chodorow (2008) (also in Tetreault et al. (2014)), who asked two native English speakers to insert a missing preposition into 200 randomly chosen, 2See Hayes and Krippendorff (2007) or Artstein and Poesio (2008) for the pros and cons of different IAA metrics. well-formed sentences from which a single preposition had been removed. Despite the simplicity of this correction task, the authors reported κ-agreement of just 0.7, noting that in cases where the raters disagreed, their disagreements were often “licensed by context” and thus actually “acceptable alternatives”. This led them to conclude that they would “expect even more disagreement when the task is preposition error detection in ’noisy’ learner texts” and, by extension, imply that detection of all error types in ’noisy’ texts would show more disagreement still. 
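For reference, Cohen's κ normalizes observed agreement by the agreement expected from each rater's label distribution alone. The following short sketch is our own illustration for two raters making binary sentence-level judgments; it also makes it easy to see why two raters who choose different but equally valid corrections are counted as disagreeing.

    from collections import Counter

    def cohens_kappa(rater_a, rater_b):
        n = len(rater_a)
        p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n          # observed agreement
        freq_a, freq_b = Counter(rater_a), Counter(rater_b)
        p_e = sum((freq_a[l] / n) * (freq_b[l] / n)                      # chance agreement
                  for l in set(freq_a) | set(freq_b))
        return (p_o - p_e) / (1 - p_e)

    # toy sentence-level judgments ("ok" = acceptable, "err" = contains an error)
    a = ["ok", "err", "err", "ok", "err", "ok"]
    b = ["ok", "err", "ok", "ok", "err", "err"]
    print(round(cohens_kappa(a, b), 3))                                  # 0.333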
The most important question to ask then, as a result of this study, is whether low κ-scores in ’noisy’ texts are truly indicative of real disagreement, or whether, as in this preposition test, the disagreement is actually the result of multiple correct answers, and therefore not disagreement at all. In a related study, and aware of the fact that there are often multiple ways to correct individual words in sentence, Rozovskaya and Roth (2010) instead chose to compute agreement at the sentence level. Specifically, three raters were asked simply to decide whether they thought 200 sentences were correct or not. This time, despite operating at the more general sentence level, the authors reported κ scores of just 0.16, 0.4 and 0.23, surmising that “the low numbers reflect the difficulty of the task and the variability of the native speakers’ judgments about acceptable usage.” If that is the case, then true disagreement may be indistinguishable from native variability, and we should be wary of using IAA statistics as a measure of agreement or evaluation in GEC. 2.2 Inter-Annotator Agreement in SMT In fact, the issues regarding the reliability of IAA metrics are not unique to GEC and we can also draw a parallel with the field of statistical machine translation (SMT). In the same way that there is often more than one way to correct a sentence in GEC, it is also well known that there is often more than one way to translate a sentence in SMT. Nevertheless, while several papers have successfully discussed ways to minimize annotator bias effects in SMT (Snover et al., 2006; Madnani et al., 2008), IAA metrics such as κ still unhelpfully play a role in the field and have, for example, been reported almost every year in the Workshop on Machine Translation (WMT) conference. 698 Source: To put it in the nutshell, I believe that people should have the obligation to tell their relatives about the genetic testing result for the good of their health. A1 To put it in a nutshell, I believe that people should be obliged to tell their relatives about their genetic test results for the good of their health. A2 In a nutshell, I believe that people should have an obligation to tell their relatives about the genetic testing result for the good of their health. A3 In summary, I believe that people should have the obligation to tell their relatives about the genetic testing result for the good of their health. A4 In a nutshell, I believe that people should be obligated to tell their relatives about the genetic testing result for the good of their health. A5 To put it in a nutshell, I believe that people should be obligated to tell their relatives about the genetic testing results for the good of their health. A6 To put it in the nutshell, I believe that people should have an obligation to tell their relatives about their genetic test results for the good of their health. A7 To put it in a nutshell, I believe that people should have the obligation to tell their relatives about the genetic testing result for the good of their health. A8 To put it in a nutshell, I believe that people should be obligated to tell their relatives about the genetic testing result for the good of their health. A9 To put it in a nutshell, I believe that people should have the obligation to tell their relatives about the genetic test result for the good of their health. A10 To put it in a nutshell, I believe that people should have the obligation to tell their relatives about the genetic test results for the good of their health. 
Table 1: Table showing how each of the 10 annotators edited the same source sentence in Essay 25. The words in the source sentence that were changed are highlighted in bold. This is in spite of the fact that the average interannotator κ score across all language pairs over the past five years has never been higher than 0.4 (Bojar et al., 2014). One important paper that attempts to explain why IAA metrics score so poorly in SMT is by Lommel et al. (2014), who asked annotators to highlight and categorize sections of automatically translated text they believed to be erroneous. Their results showed that while annotators were often able to agree on the rough locations of errors, they often disagreed as to the specific boundaries of those errors: for instance, given the phrase “had go”, some annotators considered just the participle “go” →“gone” to be the minimal error, while others considered the whole verbal unit, “had go” →“had gone”, to be the minimal error. Similarly, the authors also noted that annotators sometimes had problems categorizing ambiguous errors which could be classified into more than one error category. In short, while annotators already vary as to what they consider an error, these observations show that even when they do apparently agree, there is no guarantee that every annotator will define the error in exactly the same terms. This poses a problem for IAA statistics, which rely on an exact match to measure agreement. Finally, it is also worth mentioning that a related study, by Denkowski and Lavie (2010), suggested that “annotators also have difficulty agreeing with themselves” (shown from intra-annotator agreement κ scores of about 0.6), and so we should be especially wary of using IAA metrics to validate datasets that may even be unreliable for a single annotator. 3 Annotator Bias In an effort to better understand how annotators’ judgments might differ, we first carried out a small-scale qualitative analysis on a handful of random sentences corrected by the 10 human annotators in our dataset. One such sentence, and all its various corrections, is shown in Table 1. It is interesting to note that, for even as short an idiom as “To put it in the nutshell’, there are still multiple alternative edits. Although 8 out of the 10 annotators elected to replace the article “the” with “a”, among them, A2 and A4 also deleted “To put it” from the expression. Of the remaining 2 annotators, A3 chose to replace the idiom entirely with “In summary”, while A6 made no correction at all. Although no correction appears to be unacceptable to the majority of annotators, it is also not completely ungrammatical (just idiomatically awkward) so it may be that A6 has a higher tolerance for this kind of error than the other annotators. Alternatively, there is also always the possibility that, given such a large amount of text to correct, this error was simply overlooked. Another noteworthy difference is that annotators A1, A4, A5, and A8 all elected to change the 699 verb “have the obligation” from active to passive, although A1 still disagreed with the others on the form of the participle. Similarly, there is also a great difference of opinion on whether “testing result” should be corrected or not, and if so, how. While half of the annotators left the phrase unchanged, A1, A6, and A10 all changed both words to “test results”. Meanwhile, somewhere in between, A5 decided to change “result” to “results”, but not “testing” to “test”, while, conversely, A9 decided to do the opposite. 
This would suggest that error correction of even minor phrases falls along a continuum governed by each annotator’s natural bias. Finally, one of the most important results of this qualitative evaluation is that even though all 10 annotators edited the same sentence to a level they deemed grammatical, not one single annotator agreed with another exactly. This fact alone suggests IAA statistics are not a good way to evaluate GEC data and that a more robust agreement metric must take into account the possibility of alternative correct answers. 4 Data Collection The raw text data in our dataset was originally produced by 25 students at the National University of Singapore (NUS) who were non-native speakers of English. They were asked to write two essays on the topics of genetic testing and social media respectively. All essays were of similar length and quality. This was important because varying the skill level of the essays is likely to further affect the natural bias of the annotators, who may then consistently over- or under-correct essays. These raw essays also formed the basis of the CoNLL2014 test data (Ng et al., 2014). See Table 2 for some basic statistics on the resulting 50 essays. The 10 annotators who annotated all 50 essays include: the 2 official annotators of CoNLL-2014, the first author of this paper, and 7 freelancers who were recruited via online recruitment website, Elance.3 All annotators are native British English speakers, many of whom also have backgrounds in English language teaching, proofreading, and/or Linguistics. All annotations were made using an online annotation platform, WAMP, especially designed for annotating ESL errors (Dahlmeier et al., 2013). Using this platform, annotators were asked to 3http://www.elance.com Total Average per essay # Paragraphs 252 5.0 # Sentences 1312 26.2 # Tokens 30144 602.9 Table 2: Statistics for the 50 unannotated essays. highlight a minimal error string in the source text, provide an appropriate correction, and then categorize their selection according to the same 28category error framework used by CoNLL-2014. Before commencing annotation, however, each annotator was given detailed instructions on how to use the tool, along with an explanation of each of the error categories. In cases of uncertainty, annotators were also encouraged to ask questions. As it was slightly harder to control the quality of the 7 independently recruited annotators via Elance, they were each preliminarily asked to annotate only the first two essays before being given detailed feedback on their work. The main purpose of this feedback was to make sure that they a) understood the error category framework, and b) knew how to deal with more complicated cases such as word insertions, punctuation, etc. Unless it was felt that they had overlooked an obvious error in these first two essays, the feedback did not go so far as to tell annotators what they should and should not highlight in an effort to preserve individual annotator bias. In all, while the specific time taken to complete annotation of all 50 essays was not calculated, all annotators completed the task over a period of about 3 weeks, at a rate of about 45 minutes per essay. 4.1 Early Observations To investigate the extent to which different annotators have different biases, we first counted the total number of edits made by each annotator and sorted them by error category (Table 3). 
As can be seen, there is quite a difference between the annotator who made the most edits (A1) and the annotator who made the fewest edits (A7), with A1 making more than twice the number of edits as A7. This just goes to show how varied judgments on grammaticality can be. Incidentally, annotators A3 and A7, who are among those who made the fewest edits, were also the two official gold standard annotators in CoNLL-2014. There is also a large difference between edits in 700 Category A1 A2 A3 A4 A5 A6 A7 A8 A9 A10 Total ArtOrDet 879 639 443 503 665 620 331 358 390 624 5452 Cit 0 0 0 0 0 1 0 2 0 0 3 Mec 227 376 493 325 411 336 228 733 598 780 4507 Nn 404 290 228 264 360 300 215 254 277 365 2957 Npos 21 21 15 21 31 28 19 25 29 23 233 Others 42 186 49 116 95 43 44 34 125 105 839 Pform 431 52 18 57 30 83 47 53 19 18 808 Pref 4 79 153 18 223 53 96 92 250 180 1148 Prep 755 488 390 421 502 556 211 276 362 459 4420 Rloc– 488 308 199 331 187 244 94 174 296 240 2561 Sfrag 1 5 1 3 1 5 13 2 12 2 45 Smod 1 4 5 0 1 0 0 3 1 1 16 Spar 0 18 24 0 2 11 3 2 8 0 68 Srun 157 38 21 16 17 18 7 15 17 37 343 Ssub 74 54 10 4 25 81 68 21 18 82 437 SVA 162 123 154 95 140 114 105 132 144 144 1313 Trans 248 100 78 147 118 81 93 199 87 95 1246 Um 5 12 42 25 25 12 12 19 7 8 167 V0 137 35 37 50 81 69 31 58 51 85 634 Vform 388 168 91 100 156 125 132 78 122 124 1484 Vm 71 48 37 67 119 24 49 39 4 62 520 Vt 100 209 150 200 82 237 133 234 117 188 1650 Wa 0 1 1 3 1 1 0 2 4 2 15 Wci 623 476 479 446 456 595 340 250 212 346 4223 Wform 126 107 103 150 136 145 77 103 107 81 1135 WOadv 23 48 27 23 61 76 12 94 41 62 467 WOinc 187 67 54 78 53 74 22 24 87 103 749 Wtone 6 30 15 65 38 27 9 10 12 15 227 Total 5560 3982 3317 3528 4016 3959 2391 3286 3397 4231 37667 Table 3: Table showing how many annotations each annotator made in terms of error category. See Ng et al. (2014) Table 1 for a more detailed description of error categories. terms of category use, with almost half of all edits falling into the categories for article or determiner (ArtOrDet), spelling or punctuation (Mec), preposition (Prep), or word choice (Wci) errors. 5 Quantitative Analysis In the main phase of experimentation, we first investigated how different numbers of annotators affected the performance of various systems in the context of the CoNLL-2014 shared task. To do this, we downloaded the official system output of all the participating teams4 and then the MaxMatch (M2) Scorer5 (Dahlmeier and Ng, 2012), which was the official scorer of the previous CoNLL-2013 and CoNLL-2014 shared tasks. This scorer evaluates a system at the sentence level in terms of correct edits, proposed edits, and gold edits, and uses these to calculate an F-score for each team. When more than one set of gold standard annotations is available, the scorer will calculate F-scores for each alternative 4http://www.comp.nus.edu.sg/˜nlp/ conll14st/official_submissions.tar.gz 5http://www.comp.nus.edu.sg/˜nlp/sw/ m2scorer.tar.gz gold-standard sentence and choose the one from whichever annotator scored the highest. As in CoNLL-2014, we calculate F0.5, which weights precision twice as much as recall, because it is more important for a system to be accurate than to correct every possible error. See (Ng et al., 2014) for more details on how F0.5 is calculated. 
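As a reminder of how the weighted measure behaves, the snippet below (our own minimal sketch, not the M2 scorer itself) computes precision, recall and F0.5 from the counts of correct, proposed and gold edits that the scorer reports.

    def f_beta(correct, proposed, gold, beta=0.5):
        # beta < 1 weights precision more heavily than recall
        p = correct / proposed if proposed else 0.0
        r = correct / gold if gold else 0.0
        if p == 0.0 and r == 0.0:
            return p, r, 0.0
        return p, r, (1 + beta ** 2) * p * r / (beta ** 2 * p + r)

    # e.g. a system whose 200 proposed edits include 80 of the 300 gold edits
    print(f_beta(correct=80, proposed=200, gold=300))   # roughly (0.40, 0.27, 0.36)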
5.1 Pairwise Evaluation In order to quantify how much the F-score can vary in a realistic grammar checking scenario when there is only one gold standard annotator, we first computed the scores for a participating system vs each annotator in a pairwise fashion. Table 4 hence shows how the top team in CoNLL-2014, CAMB (Felice et al., 2014), performed against each of the 10 human annotators individually. While Tetrault and Chodorow (2008) and Tetreault et al. (2014) reported a difference of 10% precision and 5% recall between their two individual annotators in their simplified preposition correction task, Table 4 shows this difference can actually be as much as almost 15% precision (A1 vs A7) and 6% recall (A1 vs A3) in a more realistic full scale correction task. This equates to a differ701 CAMB P R F0.5 A1 39.64 14.06 29.06 A2 35.73 17.35 29.48 A3 35.22 20.29 30.70 A4 32.69 17.88 28.04 A5 35.74 17.26 29.43 A6 35.76 17.73 29.72 A7 24.96 19.62 23.67 A8 29.17 16.92 25.48 A9 32.03 18.28 27.84 A10 35.52 16.26 28.72 Table 4: Table showing the F0.5 scores for the top team in CoNLL-2014, CAMB, against each of the 10 annotators individually. ence of over 7% F0.5 (A3 vs A7) and once again shows how varied annotator’s judgments can be. 5.2 All Combinations 5.2.1 Human vs Human Whereas previously we could only calculate F0.5 scores on a system vs human basis, when there are two or more annotators, we can also calculate scores on a human vs human basis. In fact, as the number of annotators increases, we can also start to calculate scores against different combinations of gold standard annotations.6 To give an example, since we have 10 annotators, a subset of these annotators, say annotators a2–a8, could be chosen as the gold standard annotations. We could then evaluate how each of the remaining annotators (i.e., annotator a1, a9, and a10) performs against this gold standard, by computing the M2 score for annotator a1 against annotators a2–a8, annotator a9 against annotators a2–a8, and annotator a10 against annotators a2–a8. We then average these 3 M2 scores, to determine how, on average, an annotator performs when measured against gold standard annotators a2–a8. It is worth reiterating, however, that when more than one annotator is used as the gold standard, the M2 scorer will choose whichever annotator for the given sentence produces the highest F-score; i.e., if a2–a8 are the gold standard and we want to compute the F-score for a9, the M2 scorer will compute a9 vs a2, a9 vs a3, . . . , a9 vs a8 separately for each sentence, and choose the highest. 6Note that by combinations of annotators, we mean simply that the M2 scorer has access to a larger number of alternative gold standard corrections; we do not attempt to merge annotations in any way. The above calculations can be formalized as Equation 1: g(X) = 1 |A| −|X| X a∈A\X f(a, X) (1) where A is the set of all annotators (|A| = 10 in our case) and X is a non-empty and proper subset of A, denoting the set of annotators chosen to be in the gold standard. The function f(a, X) is the score computed by the M2 scorer to evaluate annotator a against each set of gold standard annotators X. g(X) is thus the average M2 scores for the remaining annotators against the input gold standard combination X. So far, in our example, we have chosen annotators a2–a8 to be the gold standard. There are, however, many other different ways of choosing 7 annotators to serve as the gold standard. 
For example, we could have chosen { a1, a2, ..., a7 }, { a1, a3, a4, ..., a8 }, etc. In fact, there are 10 7  = 120 different combinations of 7 annotators. As such, we can also compute how an individual human annotator performs when measured against any combination of 7 gold standard annotators, by averaging these 120 M2 scores. The above calculation is formalized in the general case in Equation 2: hi = 1 |A| |X|  X X:|X|=i g(X) (2) where |A| |X|  is the binomial coefficient for |A| choose |X| and 1 ≤i < |A|. The function g(X) is defined in Equation 1. The resulting hi values are hence the average F0.5 scores achieved by any human against any combination of i other humans, and so, in some ways, also represent the upper bound of human performance on the current dataset. The specific values for hi are shown in the second column of Table 5. 5.2.2 Caveat One caveat regarding this method is that the number of all possible combinations of annotators is of the order 2|A|, which quickly becomes computationally expensive for large values of |A|. Fortunately however, in a realistic GEC evaluation scenario, it is only the last row of Table 5 that we are most interested in, and so it is actually only necessary to calculate a much more manageable |A| |A|−1  gold standard combinations, which is conveniently 702 Gold Human (hi) AMU CAMB CUUI Annotators (i) Avg F0.5 Avg F0.5 Ratio Avg F0.5 Ratio Avg F0.5 Ratio 1 45.91 24.20 52.71% 28.22 61.46% 26.76 58.29% 2 56.68 33.47 59.05% 37.77 66.64% 36.04 63.59% 3 61.83 38.35 62.03% 42.68 69.03% 40.76 65.92% 4 65.05 41.53 63.85% 45.87 70.51% 43.77 67.29% 5 67.33 43.84 65.11% 48.17 71.54% 45.94 68.23% 6 69.07 45.62 66.06% 49.93 72.29% 47.60 68.92% 7 70.45 47.06 66.80% 51.34 72.87% 48.94 69.46% 8 71.60 48.26 67.40% 52.50 73.32% 50.05 69.89% 9 72.58 49.28 67.90% 53.47 73.67% 50.99 70.25% Table 5: Table showing average human F0.5 scores over all combinations of 1 ≤i < 10 gold annotators compared to the same averages for the top 3 systems in CoNLL-2014, and the ratio percentage of each team’s average score versus the human average score. equal to the total number of annotators. We only compute all combinations here in order to quantify, for the first time, how much each additional annotator affects performance. 5.2.3 System vs Human In addition to calculating scores on a human vs human basis, we also calculated the F-scores for the top three CoNLL-2014 teams, AMU (JunczysDowmunt and Grundkiewicz, 2014), CAMB (Felice et al., 2014), and CUUI (Rozovskaya et al., 2014), versus all the combinations of humans (Equation 3). si = 1 |A| |X|  X X:|X|=i f(s, X) (3) Specifically, s ∈S, where S is the set of all three shared task systems, i.e., {AMU, CAMB, CUUI}, and f(s, X) is the same function in Equation 1 which is the score computed by the M2 scorer to evaluate system s against the set of annotators X chosen to be in the gold standard. The average F0.5 scores for each of the team’s systems versus increasing numbers of i annotators are also shown in Table 5. We notice from these scores that, as expected, both system and human performance increases as more annotators are used in a gold standard. We do now, however, have data that quantifies exactly how much each additional annotator affects the score. This effect can be more clearly seen in Figure 1. It is important to note, however, that even with 9 annotators, human output itself does not reach close to 100% F0.5 and instead, the difference between the systems and the humans is about 20% F0.5. 
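Equations (1)-(3) can be read as nested loops over annotator subsets. The sketch below is our own paraphrase: score(hyp, gold_set) is a placeholder for the M2 scorer call f, and the dummy scorer exists only to make the example runnable.

    from itertools import combinations
    from statistics import mean

    def g(gold_subset, annotators, score):
        # Eq. (1): average score of the remaining annotators against gold_subset
        rest = [a for a in annotators if a not in gold_subset]
        return mean(score(a, gold_subset) for a in rest)

    def h_i(i, annotators, score):
        # Eq. (2): average of g over all combinations of i gold-standard annotators
        return mean(g(set(X), annotators, score) for X in combinations(annotators, i))

    def s_i(system, i, annotators, score):
        # Eq. (3): average system score over the same combinations
        return mean(score(system, set(X)) for X in combinations(annotators, i))

    annotators = ["a%d" % k for k in range(1, 11)]
    dummy_score = lambda hyp, gold: 50.0 + 2.0 * len(gold)   # placeholder for the M2 F0.5
    print(h_i(7, annotators, dummy_score), s_i("CAMB", 7, annotators, dummy_score))

As noted in Section 5.2.2, a realistic evaluation only needs the |A| leave-one-out subsets of size |A|-1 rather than all possible combinations.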
Furthermore, the curves for humans and systems also remain roughly parallel, suggesting human corrections gain as much benefit as system corrections from larger sets of gold standard annotations. 5.3 Ratio Scoring In light of the above observation that even humans vs humans are unable to score 100% F0.5, it thus seems unreasonable to expect machines to do the same. As such, we propose that it is much more informative to score system output against the average performance of humans instead of against the theoretical maximum score. The ratio values for the three CoNLL-2014 teams against the human gold standards of various sizes are hence also reported in Table 5. The most important thing to note is that these figures are not only much higher than the low F0.5 values currently reported in the literature, they are also more representative of the state of the art. For instance, it is highly significant that we can report that the top system in CoNLL2014, CAMB, is actually able to perform 73% as reliably as a human, which suggests GEC may actually be a more viable technology than was previously thought. 6 Additional Experiments 6.1 Error Categories As well as carrying out experiments at the system level, we also carried out similar experiments at the error category level. More specifically, we recalculated the values of Equation 1 and 2 for cases where the set of annotations consisted of only a 703 1 2 3 4 5 6 7 8 9 0 20 40 60 80 100 Number of Gold Standard Annotators F0.5 Human AMU CAMB CUUI Figure 1: Graph showing how average F0.5 scores for humans and systems increase as the number of gold standard annotators also increases (all error types, 50 Essays). single specific error type. Since the participating teams in CoNLL-2014 were not asked to classify the type of errors their systems corrected, we were only able to calculate these new values using the 10 sets of human annotations. Like Figure 1, we can see from Figure 2 that the F0.5 performance of individual error types increases diminishingly as the number of annotators in the gold standard also increases. More importantly, however, we notice that some error types achieve much higher scores than others, which suggests some annotators agree on certain categories more than others. In particular, noun number (Nn) and subjectverb agreement (SVA) errors achieve the highest scores, at just under 90% F0.5, which is also not far from the 100% F0.5 that would be achieved if we had gold standard answers for all possible alternative corrections of this type. The most likely reason for this is that, as the correction of these error types typically only involves the addition or removal of an -s suffix, i.e., a minor change in number morphology, there is very little room for annotators to disagree. In contrast, the next highest category, article and determiner errors (ArtOrDet), has a slightly larger confusion set, {the, a/an, ϵ}, which may account for the slightly lower score. Similarly, the next group of error categories, spelling and punctuation 1 2 3 4 5 6 7 8 9 0 20 40 60 80 100 Number of Gold Standard Annotators F0.5 Nn V t WOinc SV A Wform Wci ArtOrDet Prep Mec Trans Figure 2: Graph showing how average F0.5 scores for various error categories increase as the number of gold standard annotators also increases (50 essays). Calculations based on human annotations only. 
(Mec), verb tense (Vt), and word form (Wform), which all often involve a similar type of edit operation to a word lemma, likewise have slightly larger confusion sets that include a larger variety of possible morphological inflections. It is likely that the next category, prepositions (Prep), also has a confusion set of a similar size. The last three categories, conjunctions (alltypes) (Trans), word order (WOinc) and word choice (Wci), are all notable because they perform significantly worse than the hitherto mentioned categories. The main reason for this is that these error types all typically have a scope much larger than most other categories in that they often involve changes at the structural or semantic level; e.g., changing an active to a passive or choosing a synonym. For this reason, there are often many more alternative ways to correct them, meaning they are also much more likely to be affected by annotator bias. 704 1 2 3 4 5 6 7 8 9 0 20 40 60 80 100 Number of Gold Standard Annotators F0.5 Human AMU CAMB CUUI Figure 3: Graph showing how average F0.5 scores for humans and systems increase as the number of gold standard annotators also increases (all error types, 10 Essays). 6.2 Essay Subsets Now that we had empirical evidence showing how F0.5 scores varied with the number of annotators, an additional question to ask was whether the same trends for 50 essays were also present in a smaller subset of essays. We therefore repeated the main experiment with all error types, but this time used just 10 essays (specifically, essays 1–10) in both the hypothesis and gold standard. The results are shown in Figure 3. Compared to Figure 1, the most significant difference between these two graphs is that the ranking for AMU and CUUI has changed, although not by much in terms of F0.5. The most likely reason for this is that the distribution of error types in the smaller subset of essays is better suited to AMU’s more general SMT approach than to CUUI’s more targeted classifier based approach. For instance, see Table 9 in Ng et al. (2014) to compare each team’s performance on different error types in the CoNLL-2014 shared task. In other words, while the overall relationship between the system and human scores on 10 and 50 essays remains more or less the same, researchers must be aware that smaller datasets may have more skewed error distributions, which in turn may affect system performance, dependent upon correction strategy. With a balanced test set though, it would seem feasible to carry out future evaluation research on as few as 10 essays (about 6000 words). 7 Conclusion To summarize, we first showed that 10 individual annotators can all correct the same sentence in 10 different ways, yet also all produce valid alternatives. This implies that inter-annotator agreement statistics, which rely on exact matching, are not well-suited to grammatical error correction, because it may not be the case that annotators truly disagree, but rather that they have a bias towards a particular type of alternative answer. We next showed that, as has long been suspected, increasing the number of annotators in the gold standard also leads to an increase in F0.5, although at a diminishing rate. This data can be used to help researchers decide how many gold standard annotations should be used in GEC evaluation. The main result of this paper however, is that by computing scores for human against human, we determined that it is not true that any human correction is able to score 100% F0.5. 
Instead, we found that the human upper bound is roughly 73% F0.5 and that the top 3 teams from CoNLL-2014 actually perform, on average, between 67-73% as reliably as this human upper bound. This result is highly significant, because it suggests GEC systems may actually be more viable than their previously low F0.5 scores would suggest. In addition to the above, we also found that humans tend to agree on some error categories more than others, and suggest that one of the main reasons for this concerns the size of the confusion set of the particular error type. Finally, not only are we making the corrections by 10 annotators of all 50 essays available with this paper, we also showed that the trends found in the data are also consistent with the annotations of just 10 essays, allowing future research to be conducted on much less text. Acknowledgments This research is supported by Singapore Ministry of Education Academic Research Fund Tier 2 grant MOE2013-T2-1-150. We would also like to thank the three anonymous reviewers for their comments. 705 References Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555–596. Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleˇs Tamchyna. 2014. Findings of the 2014 Workshop on Statistical Machine Translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, pages 12–58, Baltimore, Maryland, USA, June. Association for Computational Linguistics. Martin Chodorow, Markus Dickinson, Ross Israel, and Joel R. Tetreault. 2012. Problems in evaluating grammatical error detection systems. In COLING, pages 611–628. Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37–46. Daniel Dahlmeier and Hwee Tou Ng. 2012. Better evaluation for grammatical error correction. In HLTNAACL, pages 568–572. Daniel Dahlmeier, Hwee Tou Ng, and Siew Mei Wu. 2013. Building a large annotated corpus of learner English: The NUS Corpus of Learner English. In Proceedings of the Eighth Workshop on Innovative Use of NLP for Building Educational Applications, pages 22–31, Atlanta, Georgia, USA. Robert Dale and Adam Kilgarriff. 2011. Helping Our Own: The HOO 2011 pilot shared task. In Proceedings of the Generation Challenges Session at the 13th European Workshop on Natural Language Generation, pages 242–249. Robert Dale, Ilya Anisimoff, and George Narroway. 2012. Helping Our Own: HOO 2012: A report on the preposition and determiner error correction shared task. In Proceedings of the Seventh Workshop on Innovative Use of NLP for Building Educational Applications, pages 54–62. Michael Denkowski and Alon Lavie. 2010. Choosing the right evaluation for machine translation: an examination of annotator and automatic metric performance on human judgment tasks. Proceedings of AMTA. Mariano Felice, Zheng Yuan, Øistein E Andersen, Helen Yannakoudakis, and Ekaterina Kochmar. 2014. Grammatical error correction using hybrid systems and type filtering. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 15–24. Andrew F Hayes and Klaus Krippendorff. 2007. Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1):77–89. Marcin Junczys-Dowmunt and Roman Grundkiewicz. 2014. 
The AMU system in the CoNLL-2014 shared task: Grammatical error correction by dataintensive and feature-rich statistical machine translation. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 25–33. Arle Richard Lommel, Maja Popovic, and Aljoscha Burchardt. 2014. Assessing inter-annotator agreement for translation error annotation. In MTE: Workshop on Automatic and Manual Metrics for Operational Translation Evaluation. Nitin Madnani, Philip Resnik, Bonnie J. Dorr, and Richard Schwartz. 2008. Are multiple reference translations necessary? Investigating the value of paraphrased reference translations in parameter optimization. Proceedings of the Eighth Conference of the Association for Machine Translation in the Americas, October. Nitin Madnani, Martin Chodorow, Joel R. Tetreault, and Alla Rozovskaya. 2011. They can help: Using crowdsourcing to improve the evaluation of grammatical error detection systems. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 508–513. Hwee Tou Ng, Siew Mei Wu, Yuanbin Wu, Christian Hadiwinoto, and Joel R. Tetreault. 2013. The CoNLL-2013 shared task on grammatical error correction. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning: Shared Task, pages 1–12, Sofia, Bulgaria. ACL. Hwee Tou Ng, Siew Mei Wu, Ted Briscoe, Christian Hadiwinoto, Raymond Hendy Susanto, and Christopher Bryant. 2014. The CoNLL-2014 shared task on grammatical error correction. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 1–14, Baltimore, Maryland, USA. ACL. Ellie Pavlick, Rui Yan, and Chris Callison-Burch. 2014. Crowdsourcing for grammatical error correction. In Proceedings of the Companion Publication of the 17th ACM Conference on Computer Supported Cooperative Work and Social Computing, CSCW Companion ’14, pages 209–212, New York, NY, USA. ACM. Alla Rozovskaya and Dan Roth. 2010. Annotating ESL errors: Challenges and rewards. In NAACL Workshop on Innovative Use of NLP for Building Educational Applications, pages 28–36. Alla Rozovskaya, Kai-Wei Chang, Mark Sammons, Dan Roth, and Nizar Habash. 2014. The IllinoisColumbia system in the CoNLL-2014 shared task. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning: Shared Task, pages 34–42. 706 Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and Ralph Weischedel. 2006. A study of translation error rate with targeted human annotation. In Proceedings of the Association for Machine Transaltion in the Americas. Joel R. Tetrault and Martin Chodorow. 2008. Native judgments of non-native usage: Experiments in preposition error detection. In COLING Workshop on Human Judgments in Computational Linguistics, pages 24–32, Manchester, UK. Joel R. Tetreault, Martin Chodorow, and Nitin Madnani. 2014. Bucking the trend: improved evaluation and annotation practices for ESL error detection systems. Language Resources and Evaluation, 48(1):5–31. 707
2015
68
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 708–718, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Knowledge Portability with Semantic Expansion of Ontology Labels Mihael Arcan1 Marco Turchi2 Paul Buitelaar1 1 Insight Centre for Data Analytics, National University of Ireland, Galway [email protected] 2 FBK- Fondazione Bruno Kessler, Via Sommarive 18, 38123 Trento, Italy [email protected] Abstract Our research focuses on the multilingual enhancement of ontologies that, often represented only in English, need to be translated in different languages to enable knowledge access across languages. Ontology translation is a rather different task then the classic document translation, because ontologies contain highly specific vocabulary and they lack contextual information. For these reasons, to improve automatic ontology translations, we first focus on identifying relevant unambiguous and domain-specific sentences from a large set of generic parallel corpora. Then, we leverage Linked Open Data resources, such as DBPedia, to isolate ontologyspecific bilingual lexical knowledge. In both cases, we take advantage of the semantic information of the labels to select relevant bilingual data with the aim of building an ontology-specific statistical machine translation system. We evaluate our approach on the translation of a medical ontology, translating from English into German. Our experiment shows a significant improvement of around 3 BLEU points compared to a generic as well as a domain-specific translation approach. 1 Introduction Currently, most of the semantically structured data, i.e. ontologies or taxonomies, has labels expressed in English only.1 On the one hand, the increasing amount of ontologies offers an excellent opportunity to link this knowledge together (G´omez-P´erez et al., 2013). On the other hand, non-English users may encounter difficulties when 1Based on (Gracia et al., 2012), around 80% of ontology labels indexed in Watson are English. using the ontological knowledge represented only in English. Furthermore, applications in information retrieval, question answering or knowledge management, that use monolingual ontologies are therefore limited to the language in which the ontology labels are stored. To make the ontological knowledge language-independent and accessible beyond language borders, these monolingual resources need to be transformed into multilingual knowledge bases. This multilingual enhancement can enable queries on documents beyond English, e.g. for cross-lingual business intelligence in the financial domain (O’Riain et al., 2013), providing information related to an ontology label, e.g. other intangible assets,2 in Spanish, German or Italian. The main challenge involved in building multilingual knowledge bases is, however, to bridge the gap between language-specific information and the language-independent semantic content of ontologies or taxonomies (Gracia et al., 2012). Since manual multilingual enhancement of ontologies is a very time consuming and expensive process, we engage an ontology-specific statistical machine translation (SMT) system to automatically translate the ontology labels. 
Due to the fact that ontology labels are usually highly domainspecific and stored only in knowledge representations (Chandrasekaran et al., 1999), the labels appear infrequent in parallel corpora, which are needed to build a domain-specific translation system with accurate translation candidates. Additionally, ambiguous labels built out of only a few words do often not express enough semantic or contextual information to guide the SMT system to translate a label into the targeted domain. This can be observed by domain-unadapted SMT systems, e.g. Google Translate, where ambiguous expressions, such as vessel stored in an medical ontology, are often translated into a generic do2ontology label stored in FINREP - FINancial REPorting 708 main as Schiff 3 in German (meaning ship or boat), but not into the targeted medical domain as Gef¨aß. Since ontologies may change over time, keeping up with these changes can be challenging for a human translator. Having in place an SMT system adapted to an ontology can therefore be very beneficial. In this work, we propose an approach to select the most relevant (parallel) sentences from a pool of generic sentences based on the lexical and semantic overlap with the ontology labels. The goal is to identify sentences that are domain-specific in respect of the target domain and contain as much as possible relevant words that can allow the SMT system to learn the translations of the monolingual ontology labels. For instance, with the sentence selection we aim to retain only parallel sentences where the English word injection is translated into the German language as Impfung in the medical domain, but not into Eind¨usung, belonging to the technical domain. This selection process aims to reduce the semantic noise in the translation process, since we try to avoid learning translation candidates that do not belong to the targeted domain. Nonetheless, some of the domain-specific ontology labels may not be automatically translatable with SMT, due to the fact that the bilingual information is missing and cannot be learned from the parallel sentences. Therefore we use the information contained in the DBpedia knowledge base (Lehmann et al., 2015) to improve the translation of expressions which are not known to the SMT system. We tested our approach on the medical domain translating from English to German, showing improvements of around 3 BLEU points compared to a generic as well as a domain-specific translation model. The remainder of this paper is organized as follows: Section 2 gives an overview of the related work done in the field of ontology translation within SMT. In Section 3, we present the methodology of parallel data selection and terminology identification to improve ontology label translation. Furthermore we show different methods of embedding domain-specific knowledge into SMT. In Experimental Setting, Section 4, we describe the ontology to be translated along the training data needed for SMT. Moreover we introduce existing approaches and give a description of metrics for automatic translation evaluation. Section 5 3Translation performed on 25.02.2015 presents the automatic and manual evaluation of the translated labels. Finally, conclusions and future work are shown in Section 6. 2 Related Work The task of ontology translation involves the finding of an appropriate translation for the lexical layer, i.e. labels, of the ontology. Most of the previous work tackled this problem by accessing multilingual lexical resources, e.g. 
EuroWordNet or IATE (Declerck et al., 2006; Cimiano et al., 2010). Their work focuses on the identification of the lexical overlap between the ontology and the multilingual resource. Since the replacement of the source and target vocabulary guarantees a high precision but a low recall, external translation services, e.g. BabelFish, SDL FreeTranslation tool or Google Translate, were used to overcome this issue (Fu et al., 2009; Espinoza et al., 2009). Additionally, ontology label disambiguation was performed by (Espinoza et al., 2009) and (McCrae et al., 2011), where the structure of the ontology along with existing multilingual ontologies was used to annotate the labels with their semantic senses. Differently to the aforementioned approaches, which rely on external knowledge or services, we focus on how to gain adequate translations using a small, but ontology-specific SMT system. We learned that using external SMT services often results in wrong translations of labels, because the external SMT services are not able to adapt to the specificity of the ontology. Avoiding existing multilingual resources, which enables a simple replacement of source and target labels, showed the possibility of improving label translations without manually generated lexical resources, since not every ontology may benefit of current multilingual resources. Due to the specificity of the labels, previous research (Wu et al., 2008; Haddow and Koehn, 2012) showed that generic SMT systems, which merge all accessible data together, cannot be used to translate domain-specific vocabulary. To avoid unsatisfactory translations of specific vocabulary we have to provide the SMT system domainspecific bilingual knowledge, from where it can learn specific translation candidates. (Eck et al., 2004) used for the language model adaptation within SMT the information retrieval technique tf-idf. Similarly, (Hildebrand et al., 2005) and (L¨u et al., 2007) utilized this approach to select 709 relevant sentences from available parallel text to adapt translation models. The results confirmed that large amounts of generic training data cannot compensate for the requirement of domainspecific training sentences. Another approach is taken by (Moore and Lewis, 2010), where, based on source and target language models, the authors calculated the difference of the cross-entropy values for a given sentence. (Axelrod et al., 2011) extend this work using the bilingual difference of cross-entropy on in-domain and out-of-domain language models for training sentence selection for SMT. (Wuebker et al., 2014) reused the crossentropy approach and applied it to the translation of video lectures. (Kirchhoff and Bilmes, 2014) introduce submodular optimization using complex features for parallel sentence selection. In their experiments they use the source and target side of the text to be translated, and show significant improvements over the widely used cross-entropy method. A different approach for sentence selection is shown in (Cuong and Sima’an, 2014), where the authors propose a latent domain translation model to distinguish between hidden in- and out-of-domain data. (Gasc´o et al., 2012) and (Bicici and Yuret, 2011) sub-sample sentence pairs whose source has most overlap with the evaluation dataset. Different from these approaches, we do not embed any specific in-domain knowledge to the generic corpus, from which sentence selection is performed. 
Furthermore, none of these methods explicitly exploit the ontological hierarchy for label disambiguation and are not specifically designed to deal with the characteristics of ontology labels. As a lexical resource, Wikipedia with its rich semantic knowledge was used as a resource for bilingual term identification in the context of SMT. (Tyers and Pieanaar, 2008) extracts bilingual dictionary entries from Wikipedia to support the machine translation system. Based on exact string matching they query Wikipedia with a list of around 10,000 noun lemmas to generate the bilingual dictionary. Besides the interwiki link system, (Erdmann et al., 2009) enhance their bilingual dictionary by using redirection page titles and anchor text within Wikipedia. To cast the problem of ambiguous Wikipedia titles, (Niehues and Waibel, 2011; Arcan et al., 2014a) use the information of Wikipedia categories and the text of the articles to provide the SMT system domain-specific bilingual knowledge. This research showed that using the lexical information stored in this knowledge base improves the translation of highly domain-specific vocabulary. However, we do not rely on category annotations of Wikipedia articles, but perform domain-specific dictionary generation based on the overlap between related words from the ontology label and the abstract of a Wikipedia article. 3 Methodology We propose an approach that uses the ontology labels to be translated to select the most relevant parallel sentences from a generic parallel corpus. Since ontology labels tend to be short (McCrae et al., 2011), we expand the label representation with its semantically related words. This expansion enables a larger semantic overlap between a label and the (parallel) sentences, which gives us more information to distinguish between related and unrelated sentences. Our approach reduces the ambiguity of expressions in the selected parallel sentences, which consequently gives more preference to translation candidates of the targeted domain. Furthermore, we access the DBpedia knowledge base to identify bilingual terminology belonging to the domain of the ontology. Once the domain-specific parallel sentences and lexical knowledge is available, we use different techniques to embed this knowledge into the SMT system. These methods are detailed in the following subsections. 3.1 Domain-Specific Parallel Sentence Selection In order to generate the best translation system we select only sentences from the generic parallel corpus which are most relevant to the labels to be translated. The first criteria for relevance was the n-gram overlap between a label and a source sentence coming from the generic corpus. Therefore we calculate the cosine similarity between the ngrams extracted from a label and the n-grams of each source sentence in the generic corpus. The similarity between the label and the sentence is defined as the cosine of the angle between the two vectors. The calculated similarity score allows us to distinguish between more and less relevant sentences. Due to the specificity of ontology labels, the ngram overlap approach is not able to select useful sentences in the presence of short labels. For 710 this reason, we improve it by extending the semantic information of labels using a technique for computing vector representations of words. The technique is based on a neural network that analyses the textual data provided as input and provides as output a list of semantically related words (Mikolov et al., 2013). 
Each input string is vectorized using the surrounding context and compared to other vectorized sets of words (from the training data) in a multi-dimensional vector space. For obtaining the vector representations we used a distributional semantic model trained on the Wikipedia articles,4 containing more than 3 billion words. Word relatedness is measured through the cosine similarity between two word vectors. A score of 1 would represent a perfect word similarity; e.g. cholera equals cholera, while the medical expression medicine has a cosine distance of 0.678 to cholera. Since words, which occur in similar contexts tend to have similar meanings (Harris, 1954), this approach enables to group related words together. The output of this technique is the analysed label with a vector attached to it, e.g. for the medical label cholera it provides related words with its relatedness value, e.g. typhus (0.869), smallpox (0.849), epidemic (0.834), dysentery (0.808) ... In our experiments, this method is implemented by the use of Word2Vec.5 To additionally disambiguate short labels, the related words of the current label are combined with the related words of its direct parent in the ontology. The usage of the ontology hierarchy allows us to take advantage of the specific vocabulary of the related words in the computation of the cosine similarity. Given a label and a source sentence from the generic corpus, related words and their weights are extracted from both of them and used as entries of the vectors passed to the cosine similarity. The most similar source sentence and the label should share the largest number of related words (largest cosine similarity). 3.2 Bilingual Terminology Identification The automatic translation of domain-specific vocabulary can be a hard task for a generic SMT system, if the bilingual knowledge is not present in the parallel dataset. To complement the previous approaches we access DBpedia6 as a multilingual lexical resource. 4Wikipedia dump id enwiki-20141106 5https://code.google.com/p/word2vec/ 6http://wiki.dbpedia.org/Downloads2014 We engage the idea of (Arcan et al., 2012) where the authors provide to the SMT system unambiguous terminology identified in Wikipedia to improve the translations of labels in the financial domain. To disambiguate Wikipedia entries with translations into different domains, they query the repository for analysing the n-gram overlap between the financial labels and the Wikipedia entries and store the frequency of categories which are associated with the matched entry. In a final step they extract only bilingual Wikipedia entries, which are associated with the most frequent Wikipedia categories identified in the previous step. Since the Wikipedia entries are often associated only with a few categories, this limited vocabulary may give only a small contribution for this disambiguation of different meanings or topics of the same Wikipedia entry. For this reason, we use for each Wikipedia entry the extended abstract, which contains more information about the entry compared to the previous approach. For ambiguous Wikipedia entries, which overlap with a medical label, we therefore calculate the cosine similarity between the related words associated with the label and the lexical information of the Wikipedia abstract. Among different ambiguous entries, the cosine similarity gives more weight to the Wikipedia entry, which is closer to our preferred domain. Finally, if the Wikipedia entry has an equivalent in the target language, i.e. 
German, we use the bilingual information for the lexical enhancement of the SMT system. 3.3 Integration of Domain-Specific Knowledge into SMT After the identification of domain-specific bilingual knowledge, it has to be integrated into the workflow of the SMT system. The injection of new obtained knowledge can be performed by retraining the domain-specific knowledge with the generic parallel corpus (Langlais, 2002; Ren et al., 2009; Haddow and Koehn, 2012) or by adding new entries directly to the translation system (Pinnis et al., 2012; Bouamor et al., 2012). These methods have the drawback that the bilingual domain specificity may get lost due to the usually larger generic parallel corpora. Giving more priority to domain-specific translations than generic ones, we focus on two techniques, i.e. the Fill-Up model (Bisazza et al., 2011) and the Cache-Based 711 Model (Bertoldi et al., 2013) approach. The Fill-Up model has been developed to address a common scenario where a large generic background model exists, and only a small quantity of domain-specific data can be used to build a translation model. Its goal is to leverage the large coverage of the background model, while preserving the domain-specific knowledge coming from the domain-specific data. For this purpose the generic and the domain-specific translation models are merged. For those translation candidates that appear in both models, only one instance is reported in the Fill-Up model with the largest probabilities according to the translation models. To keep track of a translation candidate’s provenance, a binary feature is added that gives preference to a translation candidate if it comes from the domain-specific translation model. We engage the idea of the Fill-Up model to combine the domain-specific parallel knowledge from the selected sentences with the generic (1.9M) parallel corpus. Furthermore, for embedding bilingual lexical knowledge into the SMT system, we engage the idea of cache-based translation and language models (Bertoldi et al., 2013). The main idea behind these models is to combine a large static global model with a small, but dynamic local model. This approach has already shown its potential of injecting domain-specific knowledge into a generic SMT system (Arcan et al., 2014b). For our experiments we inject the bilingual lexical knowledge identified in DBpedia and IATE into the cachebased models. The cache-based model relies on a local translation model (CBTM) and language model (CBLM). The first is implemented as an additional table in the translation model providing one score. All entries are associated with an ’age’ (initially set to 1), corresponding to the time when they were actually inserted. Each new insertion causes an ageing of the existing translation candidates and hence their re-scoring; in case of re-insertion of a phrase pair, the old value is set to the initial value. Similarly to the CBTM, the local language model is built to give preference to the provided target expressions. Each entry stored in CBLM is associated with a decaying function of the age of insertion into the model. Both models are used as additional features of the log-linear model in the SMT system. 4 Experimental Setting In this Section, we give an overview on the dataset and the translation toolkit used in our experiment. Furthermore, we describe the existing approaches and give insights into the SMT evaluation techniques, considering the translation direction from English to German. 
Evaluation Dataset For our experiments we used the International Classification of Diseases (ICD) ontology as the gold standard,7 whereby the considered translation direction is from English to German. The ICD ontology, translated into 43 languages, is used to monitor diseases and to report the general health situation of the population in a country. This stored information also provides an overview of the national mortality rate and appearance of diseases of WHO member countries. For our experiment we used 2000 English labels from the ICD-10 dataset, which were aligned to their German equivalents (Table 1). To identify the best set of sentences we experiment with different values of τ, which is the percentage of all the sentences that are considered relevant (domainspecific) by the sentence extraction approach. The value that allows the SMT system to achieve the best performance on the development dataset 1 is used on the evaluation set, which is used for the translation evaluation of ontology labels reported in this paper. The parameters within the SMT system are optimized on the development dataset 2. Statistical Machine Translation and Training Dataset For our translation task, we use the statistical translation toolkit Moses (Koehn et al., 2007), where the word alignments were built with the GIZA++ toolkit (Och and Ney, 2003). The SRILM toolkit (Stolcke, 2002) was used to build the 5-gram language model. For a broader domain coverage of the generic training dataset necessary for the SMT system, we merged parts of JRC-Acquis 3.08 (Steinberger et al., 2006), Europarl v79 (Koehn, 2005) and OpenSubtitles201310 (Tiedemann, 2012), obtaining a training corpus of 1.9M sentences, con7http://www.who.int/classifications/ icd/en/ 8https://ec.europa.eu/jrc/en/ language-technologies/jrc-acquis 9http://www.statmt.org/europarl/ 10http://opus.lingfil.uu.se/ OpenSubtitles2013.php 712 English German Generic Dataset Sentences 1.9M (out-domain) Running Words 39.8M 37.1M Vocabulary 195,912 446,068 EMEA Dataset Sentences 1.1M (domain-specific) Running Words 13.8M 12.7M Vocabulary 58,935 115,754 Development Labels 500 Dataset 1 Running Words 3,025 2,908 Vocabulary 889 951 Development Labels 500 Dataset 2 Running Words 3,003 3,020 Vocabulary 938 1,027 Evaluation Labels 1,000 Dataset Running Words 5,677 5,514 Vocabulary 1,255 1,489 Table 1: Statistics for the bilingual training, development and evaluation datasets. (’Vocabulary’ denotes the number of unique words in the dataset) taining around 38M running words (Table 1).11 The generic SMT system, trained on the concatenated 1.9 sentences, is used as a baseline, which we compare against the domain-specific models generated with different sentence selection methods. Furthermore we use the generic SMT system in combination with the smaller domainspecific models to evaluate different approaches when combining generic and domain-specific data together. We additionally compare our results to an SMT system built on an existing domain-specific parallel dataset, i.e. EMEA12 (Tiedemann, 2009), which holds specific medical parallel data extracted from the European Medicines Agency documents and websites. Comparison to Existing Approaches We compare our approach on knowledge expansion for sentence selection with similar methods that distinguish between more important sentences and less important ones. First, we sort 1.9M sentences from the generic corpus based on the perplexity of the ontology vocabulary. 
The perplexity score gives a notion of how well the probability model based on the ontology vocabulary predicts a sample, which is in our case each sentence in the generic corpus. Second, we use the method shown in (Hildebrand et al., 2005), where the authors use a method 11For reproducibility and future evaluation we take the first one-third part of each corpus. 12http://opus.lingfil.uu.se/EMEA.php based on tf-idf 13 to select the most relevant sentences. This widely-used method in information retrieval tells us how important a word is to a document, whereby each sentence from the generic corpus is treated as a document. Finally, we compare our approach with the infrequent n-gram recovery method, described in (Gasc´o et al., 2012). Their technique consists of selection of relevant sentences from the generic corpus, which contain infrequent n-grams based on their test data. They consider an n-gram as infrequent if it appears in the generic corpus less times than an infrequent threshold t. Furthermore we enrich and evaluate our proposed ontology-specific SMT system with the lexical information coming from the terminological database IATE14 (Inter-Active Terminology for Europe). IATE is the institutional terminology database of the EU and is used for the collection, dissemination and shared management of specific terminology and contains approximately 1.4 million multilingual entries. Evaluation Metrics The automatic translation evaluation is based on the correspondence between the SMT output and reference translation (gold standard). For the automatic evaluation we used the BLEU (Papineni et al., 2002) and METEOR (Denkowski and Lavie, 2014) algorithms.15 BLEU (Bilingual Evaluation Understudy) is calculated for individual translated segments (ngrams) by comparing them with a dataset of reference translations. Considering the shortness of the labels, we report scores based on the bi-gram overlap (BLEU-2) and the standard four-gram overlap (BLEU-4). Those scores, between 0 and 100 (perfect translation), are then averaged over the whole evaluation dataset to reach an estimate of the translation’s overall quality. METEOR (Metric for Evaluation of Translation with Explicit ORdering) is based on the harmonic mean of precision and recall, whereby recall is weighted higher than precision. Along with standard exact word (or phrase) matching it has additional features, i.e. stemming, paraphrasing and synonymy matching. Differently to BLEU, the metric produces good correlation with human judgement at the sentence or segment level. 13tf-idf – term frequency-inverse document frequency 14http://iate.europa.eu/downloadTbx.do 15METEOR configuration: exact, stem, paraphrase 713 The approximate randomization approach in MultEval (Clark et al., 2011) is used to test whether differences among system performances are statistically significant with a p-value < 0.05. 5 Evaluation of Ontology Labels In this Section, we report the translation quality of ontology labels based on translation systems learned from different sentence selection methods. Additionally, we perform experiments training an SMT system on the combination of in- and outdomain knowledge. The final approach enhances a domain-specific translation system with lexical knowledge identified in IATE or DBpedia. 5.1 Automatic Translation Evaluation We report the automatic evaluation based on BLEU and METEOR for the sentence selection techniques, the combination of in- and out-domain data and the lexical enhancement of SMT. 
Sentence Selection Techniques As a first evaluation, we automatically compare the quality of the ICD labels translated with different SMT systems trained on specific sentences by the aforementioned selection techniques (Table 2). Due to the in-domain bilingual knowledge, the translation system trained using the EMEA dataset performs slightly better compared to the large generic baseline system. Among the different sentence selection approaches, the infrequent n-gram recovery method (infreq. in Table 2) outperforms the baselines and all the other techniques. This is due to the very strict criteria of selecting relevant sentences that allows the infrequent n-gram recovery method to identify a limited number (20,000) of highly ontology-specific bilingual sentences. The related words and the n-gram overlap models perform slightly better than the baseline, with a usage of 81,000 and 59,000 relevant sentences, and perform similarly to the in-domain EMEA translation system. Further translation quality improvement is possible, if sentence selection methods are combined together (last four rows in Table 2). The cosine similarities of the methods are combined together, whereby new thresholds τ are computed on the development dataset 1 and applied on the ICD evaluation dataset. Each combined method showed improvement compared to the stand-alone method. The best overall performance is obtained Dataset Type Size BLEU-2 BLEU-4 METEOR Generic dataset 1.9M 17.2 6.6 24.7 EMEA dataset 1.1M 18.5 7.0 25.8 (1) perplexity 89K 17.5 6.8 24.8 (2) tf-idf 21K 12,6 4.9 18,7 (3) infreq. 20K 19.1 8.1 25.3 (4) related w. 81K 18.9 7.0 25.8 (5) n-gram 59K 17.7 7.1 23.3 (5) ∧(3) 22K 18.9 8.2* 25.1 (5) ∧(4) 24K 17.3 7.3 23.9 (3) ∧(4) 24K 18.4 8.4* 25.5* (5) ∧(4) ∧(3) 30K 20.1 8.9* 27.2* Table 2: Automatic translation evaluation on the evaluation dataset of the ICD ontology (Size = amount of selected sentences from the generic parallel corpus. bold results = best performance; *statistically significant compared to baseline) when combining the n-gram overlap, the semantic related words and infrequent n-gram recovery methods. With this combination, we reduce the amount of parallel sentences by 98% compared to the generic corpus and significantly outperform the baseline by 2.3 BLEU score points. These two factors confirm the capability of the combined approach of selecting only few ontology-specific bilingual sentences (30,000) that allows the SMT system to identify the correct translations in the target ontology domain. This is due to the fact that the three combined methods are quite complementary. In fact, the n-gram overlap method selects a relatively large amount of bilingual sentences with few words in common with the label, the related words approach identifies bilingual sentences in the ontology target domain, and the infrequent ngram recovery technique selects few bilingual sentences with only specific n-grams in common with the labels balancing the effect of the n-gram overlap method. Combining In- and Out-Domain Data Considering the relatively small amount of parallel data extracted with the sentence selecting methods for the SMT community, we evaluate different approaches that combine a large generic translation model with domain-specific data. For this purpose, we use the sentences selected by the best approach ((5)∧(4)∧(3)) in the previous experiments and combine them with the generic parallel dataset. 
We evaluate the translation performance when (i) concatenating the selected domain-specific parallel dataset with the generic 714 Dataset Type BLEU-2 BLEU-4 METEOR Generic dataset 17.2 6.6 24.7 (5)∧(4)∧(3) sent. selec. 20.1 8.9* 27.2* Data Concatenation (i) 18.1 6.8 24.1 Log-linear Models (ii) 18.9 8.1* 25.3 Fill-Up Model (iii) 17.7 7.0 24.7 (5)∧(4)∧(3) + IATE 19.8 9.0* 27.8* (5)∧(4)∧(3) + DBpedia(1) 20.6 9.1* 27.3* (5)∧(4)∧(3) + DBpedia(2) 21.0 9.6*3 28.2*3 Table 3: Evaluation of the ICD ontology evaluation dataset combining domain-specific with generic parallel knowledge and lexical enhancement of SMT using IATE and DBpedia (bold results = best performance; *statistically significant compared to baseline; 3statistically significant compared to best sentence selection model) parallel one, (ii) combining the generated translation models from the selected domain-specific parallel dataset and the generic corpus and (iii) applying the Fill-Up model to emphasise the domainspecific data in a single translation model. The translation performance of the combination methods are shown in Table 3. It is interesting to notice that none of them benefits from the use of the additional generic parallel data showing translation performance smaller than the domainspecific model. Although all methods outperform the generic translation model, only the log-linear approach, keeping in- and out-domain translation models separated, shows significant improvement. Comparing it to the combined sentence selection technique ((5)∧(4)∧(3)) does not show any statistical significant differences between the approaches. We conclude that the generic corpus is too large compared to the selected in-domain corpus, nullifying the influence of the extracted domain-specific parallel knowledge. Lexical enhancement for SMT Since the outof-vocabulary problem can be only mitigated with sentence selection, we accessed lexical resources IATE and DBpedia to further improve the translations of the medical labels. Based on the word overlap between labels and entries in IATE we extracted 11,641 English lexical entries with its equivalent in German. The DBpedia(1) approach, which disambiguates DBpedia entries based on the (Wikipedia article) categories (Arcan et al., 2012), identified 7,911 English-German expression for the targeted domain, while the abstract based disambiguation approach, marked as DBpedia(2) in Table 3 identified 3,791 bilingual entries. The lexical enhanced models further improved the translations of the medical labels (last three rows in Table 3) due to the additional bilingual information from the lexical resources, which is missing in the standalone sentence selection model. Comparing the ICD evaluation dataset and the translations generated with the DBpedia(2) lexical enhanced model we observed that more than 80 labels benefit from the additional lexical knowledge, e.g. correcting the mistranslated ”adrenal gland” into ”Nebenniere”. The lexical extraction and disambiguation of bilingual knowledge based on the abstract of the article compared to the article categories further improves the lexical choice, helping SMT systems to improve the translation of ontology labels. 5.2 Manual Evaluation of Translated Labels Since ontologies store specific vocabulary about a domain, this vocabulary is adapted to a concrete language and culture community (Cimiano et al., 2010). 
In order to investigate to what extent the automatically generated translations differ from a translator’s adapted point of view, we manually inspected the translations produced by the sentence selection approaches described in Section 5.1. While analysing the English and German part of the ICD ontology gold standard we noticed significant differences in the translations of the medical labels. As a result of the language and cultural adaptation, many labels in the ICD ontology were not always translated literally, i.e. parts of a label were semantically merged, omitted or new information was added while crossing the language border. For example, the ICD label ”acute kidney failure and chronic kidney disease” is stored in the German part of the ontology as ”Niereninsuffizienz”.16 Although none of the translation systems can generate the compounded medical expression for German, the SMT system generated nevertheless an acceptable translation, i.e. ”akutes Nierenversagen und chronischer Nierenerkrankungen”.17 A more extreme example is the English label ”slipping, tripping, stumbling and falls”, in the German ICD ontology represented as 16Niereninsuffizienz←kidney insufficiency 17akutes←acute, Nierenversagen←kidney failure, und←and, chronischer←chronic, Nierenerkrankungen←kidney disease 715 ”sonstige St¨urze auf gleicher Ebene”.18 The language and cultural adaptation is very active for this example, where the whole English label is semantically merged into the word ”St¨urze”, meaning ”falls”. Additionally, the German part holds more information within the label, i.e. ”auf gleicher Ebene” (en. ”at the same level”), which is not represented on the English side. Since the SMT system will always try to translate every phrase (word or word segments) into the target language, an automatic translation evaluation cannot reflect the overall SMT performance. Further we detected a large error class caused by compounding, a common linguistic feature of German. Although the phrase ”heart diseases” with its reference translation ”Herzkrankheiten” appears frequent in the generic training dataset, the SMT system prefers to translate it word by word into ”Herz Krankheiten”. 19 Similar observations were made with ”upper arm” (German ”Oberarm”) with the SMT word to word translation ”oberen Arm”. Finally, we analysed the impact of the semantically enriched sentence selection with related words coming from Word2Vec compared to the surface based sentence selection, e.g. preplexity, infrequent n-gram recovery or n-gram overlap. Since semantically enriched selection stored the most relevant sentences, we observed the correct translation of the label ”blood vessels” into ”Blutgef¨aße”. The generic and other surface based selections translated the expression individually into ”Blut Schiffe”, where ”Schiffe” refers to the more common English word ”ship”, but not to ’part of the system transporting blood throughout our body’. The last example illustrates further the semantic mismatch between the training domain and the test domain. Using the generic model, built mainly out of European laws and parliament discussions (JRC-Acquis/Europarl) the word ”head” inside the label ”injury of head” is wrongly translated into the word ”Leiter”, meaning ”leader” in the legal domain. Nevertheless, the additional semantic information prevents storing wrong parallel sentences and guides the SMT to the correct translation, i.e. 
”Sch¨adigung des Kopfes”.20 18sonstige←other, St¨urze←falls, auf←on, gleicher←same, Ebene←level 19Herz←heart, Krankheiten←diseases 20Sch¨adigung←injury, des←of, Kopfes←head 6 Conclusion In this paper we presented an approach to identify the most relevant sentences from a large generic parallel corpus, giving the possibility to translate highly specific ontology labels without particular in-domain parallel data. We enhanced furthermore the translation system build on the in-domain parallel knowledge with additional lexical knowledge accessing DBpedia. With the aim to better select relevant bilingual knowledge for SMT, we extend previous sentence and lexical selection techniques with additional semantic knowledge. Our proposed ontology-specific SMT system showed a statistical significant improvement (up to 3 BLEU points) of ontology label translation over the compared translation approaches. In future, we plan to integrate a larger diversity of surface, semantic and linguistic information for relevant sentence selection. Although the SMT system is capable of translating several words into a compound word, the small amount of the selected sentences limits this capability. To improve the ontology label translations, we therefore see the need to focus more on the German compound feature. Additionally we observed that more than 25% of the identified lexical knowledge consists of multi-word-expressions, e.g. ”fatal familial insomnia”. For this reason, our ongoing work focuses on the alignment of nested knowledge inside those expressions. To move further in this direction, we plan to focus on exploiting morphological term variations taking advantage of the alternative terms provided by DBpedia. Acknowledgments This publication has emanated from research supported in part by a research grant from Science Foundation Ireland (SFI) under Grant Number SFI/12/RC/2289 (Insight) and the European Union supported projects LIDER (ICT-2013.4.1-610782) and MixedEmotions (H2020-644632). References Arcan, M., Federmann, C., and Buitelaar, P. (2012). Experiments with term translation. In Proceedings of the 24th International Conference on Computational Linguistics, Mumbai, India. Arcan, M., Giuliano, C., Turchi, M., and Buitelaar, P. (2014a). Identification of Bilingual Terms 716 from Monolingual Documents for Statistical Machine Translation. In Proceedings of the 4th International Workshop on Computational Terminology (Computerm), Dublin, Ireland. Arcan, M., Turchi, M., Tonelli, S., and Buitelaar, P. (2014b). Enhancing statistical machine translation with bilingual terminology in a cat environment. In Proceedings of the 11th Conference of the Association for Machine Translation in the Americas, Vancouver, Canada. Axelrod, A., He, X., and Gao, J. (2011). Domain adaptation via pseudo in-domain data selection. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, Stroudsburg, PA, USA. Bertoldi, N., Cettolo, M., and Federico, M. (2013). Cache-based Online Adaptation for Machine Translation Enhanced Computer Assisted Translation. In Proceedings of Machine Translation Summit XIV, Nice, France. Bicici, E. and Yuret, D. (2011). Instance selection for machine translation using feature decay algorithms. In Proceedings of the Sixth Workshop on Statistical Machine Translation, Edinburgh, Scotland. Bisazza, A., Ruiz, N., and Federico, M. (2011). Fill-up versus Interpolation Methods for Phrase-based SMT Adaptation. In Proceedings of IWSLT. Bouamor, D., Semmar, N., and Zweigenbaum, P. 
(2012). Identifying bilingual multi-word expressions for statistical machine translation. In Proceedings of the Eight International Conference on Language Resources and Evaluation, Istanbul, Turkey. Chandrasekaran, B., Josephson, J. R., and Benjamins, V. R. (1999). What are ontologies, and why do we need them? IEEE Intelligent Systems, 14(1):20–26. Cimiano, P., Montiel-Ponsoda, E., Buitelaar, P., Espinoza, M., and G´omez-P´erez, A. (2010). A note on ontology localization. Appl. Ontol., 5(2):127–137. Clark, J., Dyer, C., Lavie, A., and Smith, N. (2011). Better Hypothesis Testing for Statistical Machine Translation: Controlling for Optimizer Instability . In Proceedings of the Association for Computational Lingustics. Cuong, H. and Sima’an, K. (2014). Latent domain translation models in mix-of-domains haystack. In Proceedings of the 25th International Conference on Computational Linguistics, Dublin, Ireland. Declerck, T., P´erez, A. G., Vela, O., Gantner, Z., Manzano, D., and D-Saarbr¨ucken (2006). Multilingual lexical semantic resources for ontology translation. In In Proceedings of the 5th International Conference on Language Resources and Evaluation. Denkowski, M. and Lavie, A. (2014). Meteor universal: Language specific translation evaluation for any target language. In Proceedings of the EACL 2014 Workshop on Statistical Machine Translation. Eck, M., Vogel, S., and Waibel, A. (2004). Language model adaptation for statistical machine translation based on information retrieval. In Proc. of LREC. Erdmann, M., Nakayama, K., Hara, T., and Nishio, S. (2009). Improving the extraction of bilingual terminology from wikipedia. ACM Trans. Multimedia Comput. Commun. Appl., 5(4). Espinoza, M., Montiel-Ponsoda, E., and G´omez-P´erez, A. (2009). Ontology localization. In Proceedings of the Fifth International Conference on Knowledge Capture, K-CAP ’09, New York, NY, USA. ACM. Fu, B., Brennan, R., and O’Sullivan, D. (2009). Crosslingual ontology mapping - an investigation of the impact of machine translation. In G´omez-P´erez, A., Yu, Y., and Ding, Y., editors, ASWC, volume 5926 of Lecture Notes in Computer Science. Springer. Gasc´o, G., Rocha, M.-A., Sanchis-Trilles, G., Andr´esFerrer, J., and Casacuberta, F. (2012). Does more data always yield better translations? In Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL ’12, Stroudsburg, PA, USA. G´omez-P´erez, A., Vila-Suero, D., Montiel-Ponsoda, E., Gracia, J., and Aguado-de Cea, G. (2013). Guidelines for multilingual linked data. In Proceedings of the 3rd International Conference on Web Intelligence, Mining and Semantics. ACM. Gracia, J., Montiel-Ponsoda, E., Cimiano, P., G´omezP´erez, A., Buitelaar, P., and McCrae, J. (2012). Challenges for the multilingual web of data. Web Semantics: Science, Services and Agents on the World Wide Web, 11. Haddow, B. and Koehn, P. (2012). Analysing the Effect of Out-of-Domain Data on SMT Systems. In Proceedings of the Seventh Workshop on Statistical Machine Translation, Montr´eal, Canada. Harris, Z. (1954). Distributional structure. Word, 10(23). Hildebrand, A. S., Eck, M., Vogel, S., and Waibel, A. (2005). Adaptation of the translation model for statistical machine translation based on information retrieval. In Proceedings of the 10th Conference of the European Association for Machine Translation (EAMT), Budapest. Kirchhoff, K. and Bilmes, J. (2014). Submodularity for data selection in machine translation. 
In Empirical Methods in Natural Language Processing (EMNLP). 717 Koehn, P. (2005). Europarl: A Parallel Corpus for Statistical Machine Translation. In Conference Proceedings: the tenth Machine Translation Summit, pages 79–86. AAMT. Koehn, P., Hoang, H., Birch, A., Callison-Burch, C., Federico, M., Bertoldi, N., Cowan, B., Shen, W., Moran, C., Zens, R., Dyer, C., Bojar, O., Constantin, A., and Herbst, E. (2007). Moses: Open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, Stroudsburg, PA, USA. Langlais, P. (2002). Improving a general-purpose statistical translation engine by terminological lexicons. In Proceedings of the 2nd International Workshop on Computational Terminology (COMPUTERM) ’2002, Taipei, Taiwan. Lehmann, J., Isele, R., Jakob, M., Jentzsch, A., Kontokostas, D., Mendes, P. N., Hellmann, S., Morsey, M., van Kleef, P., Auer, S., and Bizer, C. (2015). DBpedia - a large-scale, multilingual knowledge base extracted from wikipedia. Semantic Web Journal, 6(2):167–195. L¨u, Y., Huang, J., and Liu, Q. (2007). Improving statistical machine translation performance by training data selection and optimization. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL). McCrae, J., Espinoza, M., Montiel-Ponsoda, E., Aguado-de Cea, G., and Cimiano, P. (2011). Combining statistical and semantic approaches to the translation of ontologies and taxonomies. In Fifth workshop on Syntax, Structure and Semantics in Statistical Translation (SSST-5). Mikolov, T., Chen, K., Corrado, G., and Dean, J. (2013). Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Moore, R. C. and Lewis, W. (2010). Intelligent selection of language model training data. In Proceedings of the ACL 2010 Conference Short Papers, ACLShort ’10, Stroudsburg, PA, USA. Niehues, J. and Waibel, A. (2011). Using Wikipedia to Translate Domain-specific Terms in SMT. In International Workshop on Spoken Language Translation, San Francisco, CA, USA. Och, F. J. and Ney, H. (2003). A systematic comparison of various statistical alignment models. Computational Linguistics, 29. O’Riain, S., Coughlan, B., Buitelaar, P., Declerck, T., Krieger, U., and Thomas, S. M. (2013). Crosslingual querying and comparison of linked financial and business data. In Cimiano, P., Fern´andez, M., Lopez, V., Schlobach, S., and V¨olker, J., editors, ESWC (Satellite Events), volume 7955 of Lecture Notes in Computer Science. Springer. Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, ACL ’02, pages 311–318. Pinnis, M., Ljubeˇsi´c, N., S¸tef˘anescu, D., Skadin¸a, I., Tadi´c, M., and Gornostay, T. (2012). Term extraction, tagging, and mapping tools for under-resourced languages. In Proceedings of the Terminology and Knowledge Engineering (TKE2012) Conference. Ren, Z., L¨u, Y., Cao, J., Liu, Q., and Huang, Y. (2009). Improving statistical machine translation using domain bilingual multiword expressions. In Proceedings of the Workshop on Multiword Expressions: Identification, Interpretation, Disambiguation and Applications, MWE ’09, Stroudsburg, PA, USA. Steinberger, R., Pouliquen, B., Widiger, A., Ignat, C., Erjavec, T., Tufis, D., and Varga, D. (2006). 
The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC’2006). Stolcke, A. (2002). SRILM - An extensible language modeling toolkit. In Proceedings International Conference on Spoken Language Processing. Tiedemann, J. (2009). News from OPUS - A collection of multilingual parallel corpora with tools and interfaces. In Nicolov, N., Bontcheva, K., Angelova, G., and Mitkov, R., editors, Recent Advances in Natural Language Processing, volume V. John Benjamins, Amsterdam/Philadelphia, Borovets, Bulgaria. Tiedemann, J. (2012). Parallel data, tools and interfaces in opus. In Chair), N. C. C., Choukri, K., Declerck, T., Do˘gan, M. U., Maegaard, B., Mariani, J., Odijk, J., and Piperidis, S., editors, Proceedings of the Eight International Conference on Language Resources and Evaluation, Istanbul, Turkey. Tyers, F. M. and Pieanaar, J. A. (2008). Extracting bilingual word pairs from wikipedia. In Collaboration: interoperability between people in the creation of language resources for less-resourced languages (A SALTMIL workshop). Wu, H., Wang, H., and Zong, C. (2008). Domain adaptation for statistical machine translation with domain dictionary and monolingual corpora. In Proceedings of the 22nd International Conference on Computational Linguistics - Volume 1, COLING ’08. Wuebker, J., Ney, H., Mart´ınez-Villaronga, A., Gim´enez, A., , Juan, A., Servan, C., Dymetman, M., and Mirkin, S. (2014). Comparison of Data Selection Techniques for the Translation of Video Lectures. In Proc. of the Eleventh Biennial Conf. of the Association for Machine Translation in the Americas (AMTA-2014), Vancouver (Canada). 718
2015
69
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 63–73, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics MultiGranCNN: An Architecture for General Matching of Text Chunks on Multiple Levels of Granularity Wenpeng Yin and Hinrich Sch¨utze Center for Information and Language Processing University of Munich, Germany [email protected] Abstract We present MultiGranCNN, a general deep learning architecture for matching text chunks. MultiGranCNN supports multigranular comparability of representations: shorter sequences in one chunk can be directly compared to longer sequences in the other chunk. MultiGranCNN also contains a flexible and modularized match feature component that is easily adaptable to different types of chunk matching. We demonstrate stateof-the-art performance of MultiGranCNN on clause coherence and paraphrase identification tasks. 1 Introduction Many natural language processing (NLP) tasks can be posed as classifying the relationship between two TEXTCHUNKS (cf. Li et al. (2012), Bordes et al. (2014b)) where a TEXTCHUNK can be a sentence, a clause, a paragraph or any other sequence of words that forms a unit. Paraphrasing (Figure 1, top) is one task that we address in this paper and that can be formalized as classifying a TEXTCHUNK relation. The two classes correspond to the sentences being (e.g., the pair <p, q+>) or not being (e.g., the pair <p, q−>) paraphrases of each other. Another task we look at is clause coherence (Figure 1, bottom). Here the two TEXTCHUNK relation classes correspond to the second clause being (e.g., the pair <x, y+>) or not being (e.g., the pair <x, y−>) a discourse-coherent continuation of the first clause. Other tasks that can be formalized as TEXTCHUNK relations are question answering (QA) (is the second chunk an answer to the first?), textual inference (does the first chunk imply the second?) and machine translation (are the two chunks translations of each other?). p PDC will also almost certainly fan the flames of speculation about Longhorn’s release. q+ PDC will also almost certainly reignite speculation about release dates of Microsoft ’s new products. q−PDC is indifferent to the release of Longhorn. x The dollar suffered its worst one-day loss in a month, y+ falling to 1.7717 marks . . . from 1.7925 marks yesterday. y−up from 112.78 yen in late New York trading yesterday. Figure 1: Examples for paraphrasing and clause coherence tasks In this paper, we present MultiGranCNN, a general architecture for TEXTCHUNK relation classification. MultiGranCNN can be applied to a broad range of different TEXTCHUNK relations. This is a challenge because natural language has a complex structure – both sequential and hierarchical – and because this structure is usually not parallel in the two chunks that must be matched, further increasing the difficulty of the task. A successful detection algorithm therefore needs to capture not only the internal structure of TEXTCHUNKS, but also the rich pattern of their interactions. MultiGranCNN is based on two innovations that are critical for successful TEXTCHUNK relation classification. First, the architecture is designed to ensure multigranular comparability. For general matching, we need the ability to match short sequences in one chunk with long sequences in the other chunk. 
For example, what is expressed by a single word in one chunk (“reignite” in q+ in the figure) may be expressed by a sequence of several words in its paraphrase (“fan the flames of” in p). To meet this objective, we learn representations for words, phrases and the entire sentence that are all mutually comparable; in particular, these representations all have the same dimensionality and live in the same space. Most prior work (e.g., Blacoe and Lapata (2012; Hu et al. (2014)) has neglected the need for multigranular comparability and performed matching within fixed levels only, e.g., only words were 63 matched with words or only sentences with sentences. For a general solution to the problem of matching, we instead need the ability to match a unit on a lower level of granularity in one chunk with a unit on a higher level of granularity in the other chunk. Unlike (Socher et al., 2011), our model does not rely on parsing and it can more exhaustively search the hypothesis space of possible matchings, including matchings that correspond to conflicting segmentations of the input chunks (see Section 5). Our second contribution is that MultiGranCNN contains a flexible and modularized match feature component. This component computes the basic features that measure how well phrases of the two chunks match. We investigate three different match feature models that demonstrate that a wide variety of different match feature models can be implemented. The match feature models can be swapped in and out of MultiGranCNN, depending on the characteristics of the task to be solved. Prior work that has addressed matching tasks has usually focused on a single task like QA (Bordes et al., 2014a; Yu et al., 2014) or paraphrasing (Socher et al., 2011; Madnani et al., 2012; Ji and Eisenstein, 2013). The ARC architectures proposed by Hu et al. (2014) are intended to be more general, but seem to be somewhat limited in their flexibility to model different matching relations; e.g., they do not perform well for paraphrasing. Different match feature models may also be required by factors other than the characteristics of the task. If the amount of labeled training data is small, then we may prefer a match feature model with few parameters that is robust against overfitting. If there is lots of training data, then a richer match feature model may be the right choice. This motivates the need for an architecture like MultiGranCNN that allows selection of the taskappropriate match feature model from a range of different models and its seamless integration into the architecture. In remaining parts, Section 2 introduces some related work; Section 3 gives an overview of the proposed MultiGranCNN; Section 4 shows how to learn representations for generalized phrases (gphrases); Section 5 describes the three matching models: DIRECTSIM, INDIRECTSIM and CONCAT; Section 6 describes the two 2D pooling methods: grid-based pooling and phrase-based pooling; Section 7 describes the match feature CNN; Section 8 summarizes the architecture of MultiGran CNN; and Section 9 presents experiments; finally, Section 10 concludes. 2 Related Work Paraphrase identification (PI) is a typical task of sentence matching and it has been frequently studied (Qiu et al., 2006; Blacoe and Lapata, 2012; Madnani et al., 2012; Ji and Eisenstein, 2013). Socher et al. 
(2011) utilized parsing to model the hierarchical structure of sentences and uses unfolding recursive autoencoders to learn representations for single words and phrases acting as nonleaf nodes in the tree. The main difference to MultiGranCNN is that we stack multiple convolution layers to model flexible phrases and learn representations for them, and aim to address more general sentence correspondence. Bach et al. (2014) claimed that elementary discourse units obtained by segmenting sentences play an important role in paraphrasing. Their conclusion also endorses (Socher et al., 2011)’s and our work, for both take interactions between component phrases into account. QA is another representative sentence matching problem. Yu et al. (2014) modeled sentence representations in a simplified CNN, finally finding the match score by projecting question and answer candidates into the same space. Other relevant QA work includes (Bordes et al., 2014c; Bordes et al., 2014a; Yang et al., 2014; Iyyer et al., 2014) For more general matching, Chopra et al. (2005) and Liu (2013) used a Siamese architecture of shared-weight neural networks (NNs) to model two objects simultaneously, matching their representations and then learning a specific type of sentence relation. We adopt parts of their architecture, but we model phrase representations as well as sentence representations. Li and Xu (2012) gave a comprehensive introduction to query-document matching and argued that query and document match at different levels: term, phrase, word sense, topic, structure etc. This also applies to sentence matching. Lu and Li (2013) addressed matching of short texts. Interactions between the two texts were obtained via LDA (Blei et al., 2003) and were then the basis for computing a matching score. Compared to MultiGranCNN, drawbacks of this approach are that LDA parameters are not optimized for the specific task and that the interactions are 64 formed on the level of single words only. Gao et al. (2014) modeled interestingness between two documents with deep NNs. They mapped source-target document pairs to feature vectors in a latent space in such a way that the distance between the source document and its corresponding interesting target in that space was minimized. Interestingness is more like topic relevance, based mainly on the aggregated meaning of keywords, as opposed to more structural relationships as is the case for paraphrasing and clause coherence. We briefly discussed (Hu et al., 2014)’s ARC in Section 1. MultiGranCNN is partially inspired by ARC, but introduces multigranular comparability (thus enabling crosslevel matching) and supports a wider range of match feature models. Our unsupervised learning component (Section 4, last paragraph) resembles word2vec CBOW (Mikolov et al., 2013), but learns representations of TEXTCHUNKS as well as words. It also resembles PV-DM (Le and Mikolov, 2014), but our TEXTCHUNK representation is derived using a hierarchical architecture based on convolution and pooling. 3 Overview of MultiGranCNN We use convolution-plus-pooling in two different components of MultiGranCNN. The first component, the generalized phrase CNN (gpCNN), will be introduced in Section 4. This component learns representations for generalized phrases (gphrases) where a generalized phrase is a general term for subsequences of all granularities: words, short phrases, long phrases and the sentence itself. 
The gpCNN architecture has L layers of convolution, corresponding (for L = 2) to words, short phrases, long phrases and the sentence. We test different values of L in our experiments. We train gpCNN on large data in an unsupervised manner and then fine-tune it on labeled training data. Using a Siamese configuration, two copies of gpCNN, one for each of the two input TEXTCHUNKS, are the input to the match feature model, presented in Section 5. This model produces s1 × s2 matching features, one for each pair of g-phrases in the two chunks, where s1, s2 are the number of g-phrases in the two chunks, respectively. The s1×s2 match feature matrix is first reduced to a fixed size by dynamic 2D pooling. The resulting fixed size matrix is then the input to the second convolution-plus-pooling component, the match feature CNN (mfCNN) whose output is fed to a multilayer perceptron (MLP) that produces the final match score. Section 6 will give details. We use convolution-plus-pooling for both word sequences and match features because we want to compute increasingly abstract features at multiple levels of granularity. To ensure that g-phrases are mutually comparable when computing the s1 × s2 match feature matrix, we impose the constraint that all g-phrase representations live in the same space and have the same dimensionality. Figure 2: gpCNN: learning g-phrase representations. This figure only shows two convolution layers (i.e., L = 2) for saving space. 4 gpCNN: Learning Representations for g-Phrases We use several stacked blocks, i.e., convolutionplus-pooling layers, to extract increasingly abstract features of the TEXTCHUNK. The input to the first block are the words of the TEXTCHUNK, represented by CW (Collobert and Weston, 2008) embeddings. Given a TEXTCHUNK of length |S|, let vector ci ∈Rwd be the concatenated embeddings of words vi−w+1, . . . , vi where w = 5 is the filter width, d = 50 is the dimensionality of CW embeddings and 0 < i < |S| + w. Embeddings for words vi, i < 1 and i > |S|, are set to zero. We then generate the representation pi ∈Rd of the g-phrase vi−w+1, . . . , vi using the convolution 65 matrix Wl ∈Rd×wd: pi = tanh(Wlci + bl) (1) where block index l = 1, bias bl ∈Rd. We use wide convolution (i.e., we apply the convolution matrix Wl to words vi, i < 1 and i > |S|) because this makes sure that each word vi, 1 ≤i ≤|S|, can be detected by all weights of Wl – as opposed to only the rightmost (resp. leftmost) weights for initial (resp. final) words in narrow convolution. The configuration of convolution layers in following blocks (l > 1) is exactly the same except that the input vectors ci are not words, but the output of pooling from the previous layer of convolution – as we will explain presently. The configuration is the same (e.g., all Wl ∈Rd×wd) because, by design, all g-phrase representations have the same dimensionality d. This also ensures that each g-phrase representation can be directly compared with each other g-phrase representation. We use dynamic k-max pooling to extract the kl top values from each dimension after convolution in the lth block and the kL top values in the final block. We set kl = max(α, ⌈L −l L |S|⌉) (2) where l = 1, · · · , L is the block index, and α = 4 is a constant (cf. Kalchbrenner et al. (2014)) that makes sure a reasonable minimum number of values is passed on to the next layer. We set kL = 1 (not 4, cf. Kalchbrenner et al. 
(2014)) because our design dictates that all g-phrase representations, including the representation of the TEXTCHUNK itself, have the same dimensionality. Example: for L = 4, |S| = 20, the ki are [15, 10, 5, 1]. Dynamic k-max pooling keeps the most important features and allows us to stack multiple blocks to extract hiearchical features: units on consecutive layers correspond to larger and larger parts of the TEXTCHUNK thanks to the subset selection property of pooling. For many tasks, labeled data for training gpCNN is limited. We therefore employ unsupervised training to initialize gpCNN as shown in Figure 2. Similar to CBOW (Mikolov et al., 2013), we predict a sampled middle word vi from the average of seven vectors: the TEXTCHUNK representation (the final output of gpCNN) and the three words to the left and to the right of vi. We use noise-contrastive estimation (Mnih and Teh, 2012) for training: 10 noise words are sampled for each true example. Figure 3: General illustration of match feature model. In this example, both S1 and S2 have 10 gphrases, so the match feature matrix ˆF ∈Rs1×s2 has size 10 × 10. 5 Match Feature Models Let g1, . . . , gsk be an enumeration of the sk gphrases of TEXTCHUNK Sk. Let Sk ∈Rsk×d be the matrix, constructed by concatenating the four matrices of unigram, short phrase, long phrase and sentence representations shown in Figure 2 that contain the learned representations from Section 4 for these sk g-phrases; i.e., row Ski is the learned representation of gi. The basic design of a match feature model is that we produce an s1 × s2 matrix ˆF for a pair of TEXTCHUNKS S1 and S2, shown in Figure 3. ˆFi,j is a score that assesses the relationship between g-phrase gi of S1 and g-phrase gj of S2 with respect to the TEXTCHUNK relation of interest (paraphrasing, clause coherence etc). This score ˆFi,j is computed based on the vector representations S1i and S2j of the two g-phrases.1 We experiment with three different feature models to compute the match score ˆFi,j because we would like our architecture to address a wide variety of different TEXTCHUNK relations. We can model a TEXTCHUNK relation like paraphrasing as “for each meaning element in one sentence, there must be a similar meaning element in the other sentence”; thus, a good candidate for the match score ˆFi,j is simply vector similarity. In contrast, similarity is a less promising match score for clause coherence; for clause coherence, we want a score that models how good a continuation one g-phrase is for the other. These considerations motivate us to define three different match feature models that we will introduce now. The first match feature model is DIRECTSIM. 1In response to a reviewer question, recall that si is the total number of g-phrases of Si, so there is only one s1 × s2 matrix, not several on different levels of granularity. 66 Figure 4: CONCAT match feature model This model computes the match score of two gphrases as their similarity using a radial basis function kernel: ˆFi,j = exp(−||S1i −S2j||2 2β ) (3) where we set β = 2 (cf. Wu et al. (2013)). DIRECTSIM is an appropriate feature model for TEXTCHUNK relations like paraphrasing because in that case direct similarity features are helpful in assessing meaning equivalence. The second match feature model is INDIRECTSIM. 
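Before turning to INDIRECTSIM, Equation 3 can be made concrete with a short NumPy sketch. This is illustrative code rather than the authors' implementation: it reads Equation 3 as a standard RBF kernel over the squared Euclidean distance, assumes S1 and S2 are the stacked g-phrase representation matrices defined above (one row per g-phrase), and uses beta = 2 as stated in the text; the function name is our own.

import numpy as np

def directsim_features(S1, S2, beta=2.0):
    """DIRECTSIM match features (Eq. 3): F[i, j] = exp(-||S1[i] - S2[j]||^2 / (2*beta)).

    S1: (s1, d) g-phrase representations of the first chunk.
    S2: (s2, d) g-phrase representations of the second chunk.
    """
    # Pairwise squared Euclidean distances, computed without an explicit double loop.
    sq1 = (S1 ** 2).sum(axis=1)[:, None]             # (s1, 1)
    sq2 = (S2 ** 2).sum(axis=1)[None, :]             # (1, s2)
    sq_dists = np.maximum(sq1 + sq2 - 2.0 * S1 @ S2.T, 0.0)
    return np.exp(-sq_dists / (2.0 * beta))

For example, two chunks with 10 and 12 g-phrases of dimensionality d = 50 yield a 10 x 12 feature matrix: directsim_features(np.random.randn(10, 50), np.random.randn(12, 50)).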
Instead of computing the similarity directly as we do for DIRECTSIM, we first transform the representation of the g-phrase in one TEXTCHUNK using a transformation matrix M ∈ Rd×d, then compute the match score by inner product and sigmoid activation: ˆFi,j = σ(S1iMST 2j + b), (4) Our motivation is that for a TEXTCHUNK relation like clause coherence, the two TEXTCHUNKS need not have any direct similarity. However, if we map the representations of TEXTCHUNK S1 into an appropriate space then we can hope that similarity between these transformed representations of S1 and the representations of TEXTCHUNK S2 do yield useful features. We will see that this hope is borne out by our experiments. The third match feature model is CONCAT. This is a general model that can learn any weighted combination of the values of the two vectors: ˆFi,j = σ(wTei,j + b) (5) where ei,j ∈R2d is the concatenation of S1i and S2j. We can learn different combination weights w to solve different types of TEXTCHUNK matching. We call this match feature model CONCAT because we implement it by concatenating g-phrase vectors to form a tensor as shown in Figure 4. The match feature models implement multigranular comparability: they match all units in one TEXTCHUNK with all units in the other TEXTCHUNK. This is necessary because a general solution to matching must match a low-level unit like “reignite” to a higher-level unit like “fan the flames of” (Figure 1). Unlike (Socher et al., 2011), our model does not rely on parsing; therefore, it can more exhaustively search the hypothesis space of possible matchings: mfCNN covers a wide variety of different, possibly overlapping units, not just those of a single parse tree. 6 Dynamic 2D Pooling The match feature models generate an s1 ×s2 matrix. Since it has variable size, we apply two different dynamic 2D pooling methods, grid-based pooling and phrase-focused pooling, to transform it to a fixed size matrix. 6.1 Grid-based pooling We need to map ˆF ∈Rs1×s2 into a matrix F of fixed size s∗× s∗where s∗is a parameter. Gridbased pooling divides ˆF into s∗× s∗nonoverlapping (dynamic) pools and copies the maximum value in each dynamic pool to F. This method is similar to (Socher et al., 2011), but preserves locality better. ˆF can be split into equal regions only if both s1 and s2 are divisible by s∗. Otherwise, for s1 > s∗ and if s1 mod s∗= b, the dynamic pools in the first s∗−b splits each have  s1 s∗  rows while the remaining b splits each have  s1 s∗  + 1 rows. In Figure 5, a s1 × s2 = 4 × 5 matrix (left) is split into s∗×s∗= 3×3 dynamic pools (middle): each row is split into [1, 1, 2] and each column is split into [1, 2, 2]. If s1 < s∗, we first repeat all rows in batch style with size s1 until no fewer than s∗rows remain. Then the first s∗rows are kept and split into s∗ dynamic pools. The same principle applies to the partitioning of columns. In Figure 5 (right), the areas with dashed lines and dotted lines are repeated parts for rows and columns, respectively; each cell is its own dynamic pool. 6.2 Phrase-focused pooling In the match feature matrix ˆF ∈Rs1×s2, row i (resp. column j) contains all feature values for gphrase gi of S1 (resp. gj of S2). Phrase-focused pooling attempts to pick the largest match features 67 Figure 5: Partition methods in grid-based pooling. Original matrix with size 4 × 5 is mapped into matrix with size 3 × 3 and matrix with size 6 × 7, respectively. Each dynamic pool is distinguished by a border of empty white space around it. 
for a g-phrase g on the assumption that they are the best basis for assessing the relation of g with other g-phrases. To implement this, we sort the values of each row i (resp. each column j) in decreasing order giving us a matrix ˆFr ∈Rs1×s2 with sorted rows (resp. ˆFc ∈Rs1×s2 with sorted columns). Then we concatenate the columns of ˆFr (resp. the rows of ˆFc) resulting in list Fr = {fr 1, . . . , fr s1s2} (resp. Fc = {fc 1, . . . , fc s1s2}) where each fr (fc) is an element of ˆFr (ˆFc). These two lists are merged into a list F by interleaving them so that members from Fr and Fc alternate. F is then used to fill the rows of F from top to bottom with each row being filled from left to right.2 7 mfCNN: Match feature CNN The output of dynamic 2D pooling is further processed by the match feature CNN (mfCNN) as depicted in Figure 6. mfCNN extracts increasingly abstract interaction features from lower-level interaction features, using several layers of 2D wide convolution and fixed-size 2D pooling. We call the combination of a 2D wide convolution layer and a fixed-size 2D pooling layer a block, denoted by index b (b = 1, 2 . . .). In general, let tensor Tb ∈Rcb×sb×sb denote the feature maps in block b; block b has cb feature maps, each of size sb × sb (T1 = F ∈R1×s∗×s∗). Let Wb ∈Rcb+1×cb×fb×fb be the filter weights of 2D wide convolution in block b, fb×fb is then the size of sliding convolution regions. Then the convolution is performed as element-wise multiplication 2If ˆF has fewer cells than F, then we simply repeat the filling procedure to fill all cells. between Wb and Tb as follows: ˆTb+1 m,i−1,j−1 = σ( X Wb m,:,:,:Tb :,i−fb:i,j−fb:j+bb m) (6) where 0≤m<cb+1, 1 ≤i, j < sb+fb, bb ∈Rcb+1. Subsequently, fixed-size 2D pooling selects dominant features from kb × kb non-overlapping windows of ˆTb+1 to form a tensor as input of block b + 1: Tb+1 m,i,j = max(ˆTb+1 m,ikb:(i+1)kb,jkb:(j+1)kb) (7) where 0 ≤i, j < ⌊sb+fb−1 kb ⌋. Hu et al. (2014) used narrow convolution which would limit the number of blocks. 2D wide convolution in this work enables to stack multiple blocks of convolution and pooling to extract higher-level interaction features. We will study the influence of the number of blocks on performance below. For the experiments, we set s∗= 40, cb = 50, fb = 5, kb = 2 (b = 1, 2, · · ·). 8 MultiGranCNN We can now describe the overall architecture of MultiGranCNN. First, using a Siamese configuration, two copies of gpCNN, one for each of the two input TEXTCHUNKS, produce g-phrase representations on different levels of abstraction (Figure 2). Then one of the three match feature models (DIRECTSIM, CONCAT or INDIRECTSIM) produces an s1 × s2 match feature matrix, each cell of which assesses the match of a pair of gphrases from the two chunks. This match feature matrix is reduced to a fixed size matrix by dynamic 2D pooling (Section 6). As shown in Figure 6, the resulting fixed size matrix is the input for mfCNN, which extracts interaction features of 68 Figure 6: mfCNN & MLP for matching score learning. s∗= 10, fb = 5, kb = 2, cb = 4 in this example. increasing complexity from the basic interaction features computed by the match feature model. Finally, the output of the last block of mfCNN is the input to an MLP that computes the match score. MultiGranCNN bears resemblance to previous work on clause and sentence matching (e.g., Hu et al. (2014), Socher et al. (2011)), but it is more general and more flexible. 
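To make this composition concrete, the following schematic sketch traces one forward pass through the pipeline just described. Every argument after the two word sequences is a placeholder callable standing in for a trained component from Sections 4 to 7 (gpCNN, a match feature model, dynamic 2D pooling, mfCNN, and the final MLP); the function name and signature are our own illustration, not the authors' code.

def multigrancnn_score(chunk1, chunk2, gpcnn, match_features, pool2d, mfcnn, mlp):
    """Schematic forward pass of MultiGranCNN built from placeholder components."""
    # Siamese gpCNN: the same weights applied to both chunks produce mutually
    # comparable g-phrase representation matrices of shape (s_k, d).
    S1 = gpcnn(chunk1)
    S2 = gpcnn(chunk2)
    # Match feature model (DIRECTSIM, INDIRECTSIM or CONCAT): one basic
    # interaction score per pair of g-phrases, giving an (s1, s2) matrix.
    F_hat = match_features(S1, S2)
    # Dynamic 2D pooling (grid-based or phrase-focused): fixed size (s*, s*).
    F = pool2d(F_hat)
    # mfCNN: stacked 2D wide convolution plus pooling extracts higher-level
    # interaction features from the fixed-size matrix.
    T = mfcnn(F)
    # The MLP on top of the last block's feature maps yields the match score.
    return float(mlp(T))

In this sketch, swapping the match feature model or the pooling method changes only one of the placeholder callables, which is what makes the architecture modular.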
It learns representations of g-phrases, i.e., representations of parts of the TEXTCHUNK at multiple granularities, not just for a single level such as the sentence as ARC-I does (Hu et al., 2014). MultiGranCNN explores the space of interactions between the two chunks more exhaustively by considering interactions between every unit in one chunk with every other unit in the other chunk, at all levels of granularity. Finally, MultiGranCNN supports a number of different match feature models; the corresponding module can be instantiated in a way that ensures that match features are best suited to support accurate decisions on the TEXTCHUNK relation task that needs to be addressed. 9 Experimental Setup and Results 9.1 Training Suppose the triple (x, y+, y−) is given and x matches y+ better than y−. Then our objective is the minimization of the following ranking loss: l(x, y+, y−) = max(0, 1 + s(x, y−) −s(x, y+)) where s(x, y) is the predicted match score for (x, y). We use stochastic gradient descent with Adagrad (Duchi et al., 2011), L2 regularization and minibatch training. We set initial learning rate to 0.05, batch size to 70, L2 weight to 5 · 10−4. Recall that we employ unsupervised pretraining of representations for g-phrases. We can either freeze these representations in subsequent supervised training; or we can fine-tune them. We study the performance of both regimes. 9.2 Clause Coherence Task As introduced by Hu et al. (2014), the clause coherence task determines for a pair (x, y) of clauses if the sentence “xy” is a coherent sentence. We construct a clause coherence dataset as follows (the set used by Hu et al. (2014) is not yet available). We consider all sentences from English Gigaword (Parker et al., 2009) that consist of two comma-separated clauses x and y, with each clause having between five and 30 words. For each y, we choose four clauses y′ ... y′′′′ randomly from the 1000 second clauses that have the highest similarity to y, where similarity is cosine similarity of TF-IDF vectors of the clauses; restricting the alternatives to similar clauses ensures that the task is hard. The clause coherence task then is to select y from the set y, y′, . . . , y′′′′ as the correct continuation of x. We create 21 million examples, each consisting of a first clause x and five second clauses. This set is divided into a training set of 19 million and development and test sets of one million each. An example from the training set is given in Figure 1. Then, we study the performance variance of different MultiGranCNN setups from three perspectives: a) layers of CNN in both unsupervised (gpCNN) and supervised (mfCNN) training phases; b) different approaches for clause relation feature modeling; c) dynamic pooling methods for generating same-sized feature matrices. Figure 7 (top table) shows that (Hu et al., 2014)’s parameters are good choices for our setup as well. We get best result when both gpCNN and mfCNN have three blocks of convolution and 69 pooling. This suggests that multiple layers of convolution succeed in extracting high-level features that are beneficial for clause coherence. Figure 7 (2nd table) shows that INDIRECTSIM and CONCAT have comparable performance and both outperform DIRECTSIM. DIRECTSIM is expected to perform poorly because the contents in the two clauses usually have little or no overlapping meaning. In contrast, we can imagine that INDIRECTSIM first transforms the first clause x into a counterpart and then matches this counterpart with the second clause y. 
In CONCAT, each of s1×s2 pairs of g-phrases is concatentated and supervised training can then learn an unrestricted function to assess the importance of this pair for clause coherence (cf. Eq. 5). Again, this is clearly a more promising TEXTCHUNK relation model for clause coherence than one that relies on DIRECTSIM. acc mfCNN 0 1 2 3 gpCNN 0 38.02 44.08 47.81 48.43 1 40.91 45.31 51.73 52.13 2 43.10 48.06 54.14 54.86 3 45.62 51.77 55.97 56.31 match feature model acc DIRECTSIM 25.40 INDIRECTSIM 56.31 CONCAT 56.12 freeze g-phrase represenations or not acc MultiGranCNN (freeze) 55.79 MultiGranCNN (fine-tune) 56.31 pooling method acc dynamic (Socher et al., 2011) 55.91 grid-based 56.07 phrase-focused 56.31 Figure 7: Effect on dev acc (clause coherence) of different factors: # convolution blocks, match feature model, freeze vs. fine-tune, pooling method. Figure 7 (3rd table) demonstrates that finetuning g-phrase representations gives better performance than freezing them. Also, grid-based and phrase-focused pooling outperform dynamic pooling (Socher et al., 2011) (4th table). Phrasefocused pooling performs best. Table 1 compares MultiGranCNN to ARC-I and ARC-II, the architectures proposed by Hu et al. (2014). We also test the five baseline systems from their paper: DeepMatch, WordEmbed, SENMLP, SENNA+MLP, URAE+MLP. For MultiGranCNN, we use the best dev set settings: number of convolution layers in gpCNN and mfCNN is 3; INDIRECTSIM; phrase-focused pooling. Table 1 shows that MultiGranCNN outperforms all other approaches on clause coherence test set. 9.3 Paraphrase Identification Task We evaluate paraphrase identification (PI) on the PAN corpus (http://bit.ly/mt-para, (Madnani et al., 2012)), consisting of training and test sets of 10,000 and 3000 sentence pairs, respectively. Sentences are about 40 words long on average. Since PI is a binary classification task, we replace the MLP with a logistic regression layer. As phrase-focused pooling was proven to be optimal, we directly use phrase-focused pooling in PI task without comparison, assuming that the choice of dynamic pooling is task independent. For parameter selection, we split the PAN training set into a core training set (core) of size 9000 and a development set (dev) of size 1000. We then train models on core and select parameters based on best performance on dev. The best results on dev are obtained for the following parameters: freezing g-phrase representations, DIRECTSIM, two convolution layers in gpCNN, no convolution layers in mfCNN. We use these parameter settings to train a model on the entire training set and report performance in Table 2. We compare MultiGranCNN to ARC-I/II (Hu et al., 2014), and two previous papers reporting performance on PAN. Madnani et al. (2012) used a combination of three basic MT metrics (BLEU, NIST and TER) and five complex MT metrics (TERp, METEOR, BADGER, MAXISIM, model acc Random Guess 20.00 DeepMatch 34.17 WordEmbed 38.28 SENMLP 34.57 SENNA+MLP 42.09 URAE+MLP 27.41 ARC-I 45.04 ARC-II 50.18 MultiGranCNN 56.27 Table 1: Performance on clause coherence test set. 70 SEPIA), computed on entire sentences. Bach et al. (2014) applied MT metrics to elementary discourse units. We integrate these eight MT metrics from prior work. method acc F1 ARC-I 61.4 60.3 ARC-II 64.9 63.5 basic MT metrics 88.6 87.8 + TERp 91.5 91.2 + METEOR 92.0 91.8 + Others 92.3 92.1 (Bach et al., 2014) 93.4 93.3 8MT+MultiGranCNN (fine-tune) 94.1 94.0 8MT+MultiGranCNN (freeze) 94.9 94.7 Table 2: Results on PAN. 
“8MT” = 8 MT metrics Table 2 shows that MultiGranCNN in combination with MT metrics obtains state-of-the-art performance on PAN. Freezing weights learned in unsupervised training (Figure 2) performs better than fine-tuning them; also, Table 3 shows that the best result is achieved if no convolution is used in mfCNN. Thus, the best configuration for paraphrase identification is to “forward” fixed-size interaction matrices as input to the logistic regression, without any intermediate convolution layers. Freezing weights learned in unsupervised training and no convolution layers in mfCNN both protect against overfitting. Complex deep neural networks are in particular danger of overfitting when training sets are small as in the case of PAN (cf. Hu et al. (2014)). In contrast, fine-tuning weights and several convolution layers were the optimal setup for clause coherence. For clause coherence, we have a much larger training set and therefore can successfully train a much larger number of parameters. Table 3 shows that CONCAT performs badly for PI while DIRECTSIM and INDIRECTSIM perform well. We can conceptualize PI as the task of determining if each meaning element in S1 has a similar meaning element in S2. The s1 × s2 DIRECTSIM feature model directly models this task and the s1×s2 INDIRECTSIM feature model also models it, but learning a transformation of g-phrase representations before applying similarity. In contrast, CONCAT can learn arbitrary relations between parts of the two sentences, a model that seems to be too unconstrained for PI if insufficient training resources are available. In contrast, for the clause coherence task, concatentation worked well and DIRECTSIM worked poorly and we provided an explanation based on the specific properties of clause coherence (see discussion of Figure 7). We conclude from these results that it is dependent on the task what the best feature model is for matching two linguistic objects. Interestingly, INDIRECTSIM performs well on both tasks. This suggests that INDIRECTSIM is a general feature model for matching, applicable to tasks with very different properties. 10 Conclusion In this paper, we present MultiGranCNN, a general deep learning architecture for classifying the relation between two TEXTCHUNKS. MultiGranCNN supports multigranular comparability of representations: shorter sequences in one TEXTCHUNK can be directly compared to longer sequences in the other TEXTCHUNK. MultiGranCNN also contains a flexible and modularized match feature component that is easily adaptable to different TEXTCHUNK relations. We demonstrated state-of-the-art performance of MultiGranCNN on paraphrase identification and clause coherence tasks. Acknowledgments Thanks to CIS members and anonymous reviewers for constructive comments. This work was supported by Baidu (through a Baidu scholarship awarded to Wenpeng Yin) and by Deutsche Forschungsgemeinschaft (grant DFG SCHU 2246/8-2, SPP 1335). F1 mfCNN 0 1 2 3 gpCNN 0 92.7 92.9 92.9 93.9 1 93.2 93.5 93.9 93.5 2 94.7 94.2 93.7 93.3 3 94.5 94.0 93.6 92.9 match feature model acc F1 DIRECTSIM 94.9 94.7 INDIRECTSIM 94.7 94.5 CONCAT 93.0 92.9 Table 3: Effect on dev F1 (PI) of different factors: # convolution blocks, match feature model. 71 References Ngo Xuan Bach, Nguyen Le Minh, and Akira Shimazu. 2014. Exploiting discourse information to identify paraphrases. Expert Systems with Applications, 41(6):2832–2841. William Blacoe and Mirella Lapata. 2012. A comparison of vector-based representations for semantic composition. 
In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 546–556. Association for Computational Linguistics. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. the Journal of machine Learning research, 3:993–1022. Antoine Bordes, Sumit Chopra, and Jason Weston. 2014a. Question answering with subgraph embeddings. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014b. A semantic matching energy function for learning with multi-relational data. Machine Learning, 94(2):233–259. Antoine Bordes, Jason Weston, and Nicolas Usunier. 2014c. Open question answering with weakly supervised embedding models. Proceedings of 2014 European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases. Sumit Chopra, Raia Hadsell, and Yann LeCun. 2005. Learning a similarity metric discriminatively, with application to face verification. In Computer Vision and Pattern Recognition, 2005. CVPR 2005. IEEE Computer Society Conference on, volume 1, pages 539–546. IEEE. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of the 25th international conference on Machine learning, pages 160–167. ACM. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. The Journal of Machine Learning Research, 12:2121–2159. Jianfeng Gao, Patrick Pantel, Michael Gamon, Xiaodong He, Li Deng, and Yelong Shen. 2014. Modeling interestingness with deep neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Baotian Hu, Zhengdong Lu, Hang Li, and Qingcai Chen. 2014. Convolutional neural network architectures for matching natural language sentences. In Advances in Neural Information Processing Systems, pages 2042–2050. Mohit Iyyer, Jordan Boyd-Graber, Leonardo Claudino, Richard Socher, and Hal Daum´e III. 2014. A neural network for factoid question answering over paragraphs. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 633–644. Yangfeng Ji and Jacob Eisenstein. 2013. Discriminative improvements to distributional sentence similarity. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 891–896. Nal Kalchbrenner, Edward Grefenstette, and Phil Blunsom. 2014. A convolutional neural network for modelling sentences. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Quoc V Le and Tomas Mikolov. 2014. Distributed representations of sentences and documents. Proceedings of The 31st International Conference on Machine Learning, pages 1188–1196. Hang Li and Jun Xu. 2012. Beyond bag-of-words: machine learning for query-document matching in web search. In Proceedings of the 35th international ACM SIGIR conference on Research and development in information retrieval, pages 1177–1177. ACM. Xutao Li, Michael K Ng, and Yunming Ye. 2012. Har: Hub, authority and relevance scores in multirelational data for query search. In Proceedings of the 12th SIAM International Conference on Data Mining, pages 141–152. SIAM. Chen Liu. 2013. Probabilistic Siamese Network for Learning Representations. Ph.D. 
thesis, University of Toronto. Zhengdong Lu and Hang Li. 2013. A deep architecture for matching short texts. In Advances in Neural Information Processing Systems, pages 1367–1375. Nitin Madnani, Joel Tetreault, and Martin Chodorow. 2012. Re-examining machine translation metrics for paraphrase identification. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 182–190. Association for Computational Linguistics. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems, pages 3111–3119. Andriy Mnih and Yee Whye Teh. 2012. A fast and simple algorithm for training neural probabilistic language models. In Proceedings of the 29th International Conference on Machine Learning, pages 1751–1758. 72 Robert Parker, Linguistic Data Consortium, et al. 2009. English gigaword fourth edition. Linguistic Data Consortium. Long Qiu, Min-Yen Kan, and Tat-Seng Chua. 2006. Paraphrase recognition via dissimilarity significance classification. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing, pages 18–26. Association for Computational Linguistics. Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801–809. Pengcheng Wu, Steven CH Hoi, Hao Xia, Peilin Zhao, Dayong Wang, and Chunyan Miao. 2013. Online multimodal deep similarity learning with application to image retrieval. In Proceedings of the 21st ACM international conference on Multimedia, pages 153– 162. ACM. Min-Chul Yang, Nan Duan, Ming Zhou, and HaeChang Rim. 2014. Joint relational embeddings for knowledge-based question answering. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 645–650. Lei Yu, Karl Moritz Hermann, Phil Blunsom, and Stephen Pulman. 2014. Deep learning for answer sentence selection. NIPS deep learning workshop. 73
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 719–729, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Automatic disambiguation of English puns Tristan Miller and Iryna Gurevych Ubiquitous Knowledge Processing Lab (UKP-TUDA) Department of Computer Science, Technische Universit¨at Darmstadt https://www.ukp.tu-darmstadt.de/ Abstract Traditional approaches to word sense disambiguation (WSD) rest on the assumption that there exists a single, unambiguous communicative intention underlying every word in a document. However, writers sometimes intend for a word to be interpreted as simultaneously carrying multiple distinct meanings. This deliberate use of lexical ambiguity—i.e., punning— is a particularly common source of humour. In this paper we describe how traditional, language-agnostic WSD approaches can be adapted to “disambiguate” puns, or rather to identify their double meanings. We evaluate several such approaches on a manually sense-annotated collection of English puns and observe performance exceeding that of some knowledge-based and supervised baselines. 1 Introduction Word sense disambiguation, or WSD, is the task of identifying a word’s meaning in context. No matter whether it is performed by a human or a machine, WSD usually rests on the assumption that there is a single unambiguous communicative intention underlying each word in the document.1 However, there exists a class of language constructs known 1Under this assumption, lexical ambiguity arises due to there being a plurality of words with the same surface form but different meanings, and the task of the interpreter is to select correctly among them. An alternative view is that each word is a single lexical entry whose specific meaning is underspecified until it is activated by the context (Ludlow, 1996). In the case of systematically polysemous terms (i.e., words that have several related senses shared in a systematic way by a group of similar words), it may not be necessary to disambiguate them at all in order to interpret the communication (Buitelaar, 2000). While there has been some research in modelling intentional lexical-semantic underspecification (Jurgens, 2014), it is intended for closely related senses such as those of systematically polysemous terms, not those of coarser-grained homonyms which are the subject of this paper. as paronomasia and syllepsis, or more generally as puns, in which homonymic (i.e., coarse-grained) lexical-semantic ambiguity is a deliberate effect of the communication act. That is, the writer intends for a certain word or other lexical item to be interpreted as simultaneously carrying two or more separate meanings, or alternatively for it to be unclear which meaning is the intended one. There are a variety of motivations writers have for employing such constructions, and in turn for why such uses are worthy of scholarly investigation. Perhaps surprisingly, this sort of intentional lexical ambiguity has attracted little attention in the fields of computational linguistics and natural language processing. What little research has been done is confined largely to computational mechanisms for pun generation (in the context of natural language generation for computational humour) and to computational analysis of phonological properties of puns. 
A fundamental problem which has not yet been as widely studied is the automatic detection and identification of intentional lexical ambiguity—that is, given a text, does it contain any lexical items which are used in a deliberately ambiguous manner, and if so, what are the intended meanings? We consider these to be important research questions with a number of real-world applications. For instance, puns are particularly common in advertising, where they are used not only to create humour but also to induce in the audience a valenced attitude toward the target (Valitutti et al., 2008). Recognizing instances of such lexical ambiguity and understanding their affective connotations would be of benefit to systems performing sentiment analysis on persuasive texts. Wordplay is also a perennial topic of scholarship in literary criticism and analysis. To give just one example, puns are one of the most intensively studied aspects of Shakespeare’s rhetoric, and laborious manual counts have shown their frequency in certain 719 of his plays to range from 17 to 85 instances per thousand lines (Keller, 2009). It is not hard to image how computer-assisted detection, classification, and analysis of puns could help scholars in the digital humanities. Finally, computational pun detection and understanding hold tremendous potential for machine-assisted translation. Some of the most widely disseminated and translated popular discourses—particularly television shows and movies—feature puns and other forms of wordplay as a recurrent and expected feature (Schr¨oter, 2005). These pose particular challenges for translators, who need not only to recognize and comprehend each instance of humour-provoking ambiguity, but also to select and implement an appropriate translation strategy.2 NLP systems could assist translators in flagging intentionally ambiguous words for special attention, and where they are not directly translatable (as is usually the case), the systems may be able to propose ambiguity-preserving alternatives which best match the original pun’s double meaning. In the present work, we discuss the adaptation of automatic word sense disambiguation techniques to intentionally ambiguous text and evaluate these adaptations in a controlled setting. We focus on humorous puns, as these are by far the most commonly encountered and more readily available in (and extractable from) existing text corpora. The remainder of this paper is structured as follows: In the following section we give a brief introduction to puns, WSD, and related previous work on computational detection and comprehension of humour. In §3 we describe the data set produced for our experiments. In §§4 and 5 we describe how disambiguation algorithms, evaluation metrics, and baselines from traditional WSD can be adapted to the task of pun identification, and in §6 we report and discuss the performance of our adapted systems. Finally, we conclude in §7 with a review of our research contributions and an outline of our plans for future work. 2 Background 2.1 Puns Punning is a form of wordplay where a word is used in such a way as to evoke several independent meanings simultaneously. Humorous and non2The problem is compounded in audio-visual media such as films; often one or both of the pun’s meanings appears in the visual channel, and thus cannot be freely substituted. 
humorous puns have been the subject of extensive study in the humanities and social sciences, which has led to insights into the nature of language-based humour and wordplay, including their role in commerce, entertainment, and health care; how they are processed in the brain; and how they vary over time and across cultures (Monnot, 1982; Culler, 1988; Lagerwerf, 2002; Bell et al., 2011; Bekinschtein et al., 2011). Study of literary puns imparts a greater understanding of the cultural or historical context in which the literature was produced, which is often necessary to properly interpret and translate it (Delabastita, 1997). Puns can be classified in various ways (Attardo, 1994), though from the point of view of our particular natural language processing application the most important distinction is between homographic and homophonic puns. A homographic pun exploits distinct meanings of the same written word, and a homophonic pun exploits distinct meanings of the same spoken word. Puns can be homographic, homophonic, both, or neither, as the following examples illustrate: (1) A lumberjack’s world revolves on its axes. (2) She fell through the window but felt no pane. (3) A political prisoner is one who stands behind her convictions. (4) The sign at the nudist camp read, “Clothed until April.” In (1), the pun on axes is homographic but not homophonic, since the two meanings (“more than one axe” and “more than one axis”) share the same spelling but have different pronunciations. In (2), the pun on pane (“sheet of glass”) is homophonic but not homographic, since the word for the secondary meaning (“feeling of injury”) is properly spelled pain but pronounced the same. The pun on convictions (“strongly held beliefs” and “findings of criminal guilt”) in (3) is both homographic and homophonic. Finally, the pun on clothed in (4) is neither homographic nor homophonic, since the word for the secondary meaning, closed, differs in both spelling and pronunciation. Such puns are commonly known as imperfect puns. Other characteristics of puns important for our work include whether they involve compounds, multiword expressions, or proper names, and whether the pun’s multiple meanings involve mul720 tiple parts of speech. We elaborate on the significance of these characteristics in the next section. 2.2 Word sense disambiguation Word sense disambiguation (WSD) is the task of determining which sense of a polysemous term is the one intended when that term is used in a given communicative act. Besides the target term itself, a WSD system generally requires two inputs: the context (i.e., the running text containing the target), and a sense inventory which specifies all possible senses of the target. Approaches to WSD can be categorized according to the type of knowledge sources used to help discriminate senses. Knowledge-based approaches restrict themselves to using pre-existing lexicalsemantic resources (LSRs), or such additional information as can be automatically extracted or mined from raw text corpora. Supervised approaches, on the other hand, use manually sense-annotated corpora as training data for a machine learning system, or as seed data for a bootstrapping process. Supervised WSD systems generally outperform their knowledge-based counterparts, though this comes at the considerable expense of having human annotators manually disambiguate hundreds or thousands of example sentences. 
Moreover, supervised approaches tend to be such that they can disambiguate only those words for which they have seen sufficient training examples to cover all senses. That is, most of them cannot disambiguate words which do not occur in the training data, nor can they select the correct sense of a known word if that sense was never observed in the training data. Regardless of the approach, all WSD systems work by extracting contextual information for the target word and comparing it against the sense information stored for that word. A seminal knowledge-based example is the Lesk algorithm (Lesk, 1986) which disambiguates a pair of target terms in context by comparing their respective dictionary definitions and selecting the two with the greatest number of words in common. Though simple, the Lesk algorithm performs surprisingly well, and has frequently served as the basis of more sophisticated approaches. In recent years, Lesk variants in which the contexts and definitions are supplemented with entries from a distributional thesaurus (Lin, 1998) have achieved state-of-the-art performance for knowledge-based systems on standard data sets (Miller et al., 2012; Basile et al., 2014). In traditional word sense disambiguation, the part of speech and lemma of the target word are usually known a priori, or can be determined with high accuracy using off-the-shelf natural language processing tools. The pool of candidate senses can therefore be restricted to those whose lexicalizations exactly match the target lemma and part of speech. No such help is available for puns, at least not in the general case. Take the following two examples: (5) Tom moped. (6) “I want a scooter,” Tom moped. In the first of these sentences, the word moped is unambiguously a verb with the lemma mope, and would be correctly recognized as such by any automatic lemmatizer and part-of-speech tagger. The moped of the second example is a pun, one of whose meanings is the same inflected form of the verb mope (“to sulk”) and the other of which is the noun moped (“motorized scooter”). For such cases an automated pun identifier would therefore need to account for all possible lemmas for all possible parts of speech of the target word. The situation becomes even more onerous for heterographic and imperfect puns, which may require the use of pronunciation dictionaries, and application of phonological theories of punning, in order to recover the lemmas (Hempelmann, 2003). As our research interests are in lexical semantics rather than phonology, we focus on puns which are homographic and monolexemic. This allows us to investigate the problem of pun identification in as controlled a setting as possible. 2.3 Previous work 2.3.1 Computational humour There is some previous research on computational detection and comprehension of humour, though by and large it is not concerned specifically with puns; those studies which do analyze puns tend to have a phonological or syntactic rather than semantic bent. In this subsection we briefly review some prior work which is relevant to ours. Yokogawa (2002) describes a system for detecting the presence of puns in Japanese text. However, this work is concerned only with puns which are both imperfect and ungrammatical, relying on syntactic cues rather than the lexical-semantic information we propose to use. Taylor and Mazlack (2004) 721 describe an n-gram–based approach for recognizing when imperfect puns are used for humorous effect in a certain narrow class of English knockknock jokes. 
Their focus on imperfect puns and their use of a fixed syntactic context makes their approach largely inapplicable to perfect puns in running text. Mihalcea and Strapparava (2005) treat humour recognition as a classification task, employing various machine learning techniques on humour-specific stylistic features such as alliteration and antonymy. Of particular interest is their follow-up analysis (Mihalcea and Strapparava, 2006), where they specifically point to their system’s failure to resolve lexical-semantic ambiguity as a stumbling block to better accuracy, and speculate that deeper semantic analysis of the text, such as via word sense disambiguation or domain disambiguation, could aid in the detection of humorous incongruity and opposition. The previous work which is perhaps most relevant to ours is that of Mihalcea et al. (2010). They build a data set consisting of 150 joke set-ups, each of which is followed by four possible “punchlines”, only one of which is actually humorous (but not necessarily due to a pun). They then compare the set-ups against the punchlines using various models of incongruity detection, including many exploiting knowledge-based semantic relatedness such as Lesk. The Lesk model had an accuracy of 56%, which is lower than that of a na¨ıve polysemy model which simply selects the punchline with the highest mean polysemy (66%) and even of a random-choice baseline (62%). However, it should be stressed here that the Lesk model did not directly account for the possibility that any given word might be ambiguous. Rather, for every word in the setup, the Lesk measure was used to select a word in the punchline such that the lexical overlap between each one of their possible definitions was maximized. The overlap scores for all word pairs were then averaged, and the punchline with the lowest average score selected as the most humorous. 2.3.2 Corpora There are a number of English-language corpora of intentional lexical ambiguity which have been used in past work, usually in linguistics or the social sciences. In their work on computer-generated humour, Lessard et al. (2002) use a corpus of 374 “Tom Swifty” puns taken from the Internet, plus a well-balanced corpus of 50 humorous and nonhumorous lexical ambiguities generated programmatically (Venour, 1999). Hong and Ong (2009) also study humour in natural language generation, using a smaller data set of 27 punning riddles derived from a mix of natural and artificial sources. In their study of wordplay in religious advertising, Bell et al. (2011) compile a corpus of 373 puns taken from church marquees and literature, and compare it against a general corpus of 1515 puns drawn from Internet websites and a specialized dictionary. Zwicky and Zwicky (1986) conduct a phonological analysis on a corpus of several thousand puns, some of which they collected themselves from advertisements and catalogues, and the remainder of which were taken from previously published collections. Two studies on cognitive strategies used by second language learners (Kaplan and Lucas, 2001; Lucas, 2004) used a data set of 58 jokes compiled from newspaper comics, 32 of which rely on lexical ambiguity. Bucaria (2004) conducts a linguistic analysis of a set of 135 humorous newspaper headlines, about half of which exploit lexical ambiguity. Such data sets—particularly the larger ones— provided us good evidence that intentionally lexical ambiguous exemplars exist in sufficient numbers to make a rigorous evaluation of our task feasible. 
Unfortunately, none of the above-mentioned corpora have been published in full, and moreover many of them contain (sometimes exclusively) the sort of imperfect or otherwise heterographic puns which we mean to exclude from consideration. This has motivated us to produce our own corpus of puns, the construction and analysis of which is described in the following section. 3 Data set As in traditional WSD, a prerequisite for our research is a corpus of examples, where one or more human annotators have already identified the ambiguous words and marked up their various meanings with reference to a given sense inventory. Such a corpus is sufficient for evaluating what we term pun identification or pun disambiguation—that is, identifying the senses of a term known a priori to be a pun. 3.1 Construction Though several prior studies have produced corpora of puns, none of them are systematically senseannotated. We therefore compiled our own corpus by pooling together some of the aforementioned 722 corpora, the user-submitted puns from the Pun of the Day website,3 and private collections provided to us by some professional humorists. This raw collection of 7750 one-liners was then filtered by trained human annotators to those instances meeting the following four criteria: One pun per instance: Of all the lexical units in the instance, one and only one may be a pun. (This criterion simplifies the task detecting the presence and location of puns in a text, a classification task which we intend to investigate in future work.) One content word per pun: The lexical unit that forms the pun must consist of, or contain, only a single content word (i.e., a noun, verb, adjective, or adverb), excepting adverbial particles of phrasal verbs. This criterion is important because, in our observations, it is often only one word which carries ambiguity in puns on compounds and multi-word expressions. Accepting lexical units containing more than one content word would have required our annotators to laboriously partition the pun into (possibly overlapping) sense-bearing units and to assign sense sets to each of them, inflating the complexity of the annotation task to unacceptable levels. Two meanings per pun: The pun must have exactly two distinct meanings. Though many sources state that puns have only two senses (Redfern, 1984; Attardo, 1994), our annotators identified a handful of corpus examples where the pun could plausibly be analyzed as carrying three distinct meanings. To simplify our manual annotation procedure and our evaluation metrics we excluded these rare outliers. Weak homography: The lexical units corresponding to the two distinct meanings must be spelled exactly the same way, except that particles and inflections may be disregarded. This somewhat softer definition of homography allows us to admit a good many morphologically interesting cases which were nonetheless readily recognized by our human annotators. The filtering reduced the number of instances to 1652, whose puns two human judges annotated with sense keys from WordNet 3.1 (Fellbaum, 3http://www.punoftheday.com/ 1998). Using an online annotation tool specially constructed for this study, the annotators applied two sets of sense keys to each instance, one for each of the two meanings of the pun. For cases where the distinction between WordNet’s fine-grained senses was irrelevant, the annotators had the option of labelling the meaning with more than one sense key. 
Annotators also had the option of marking a meaning as unassignable if WordNet had no corresponding sense key. Further details of our annotation tool and its use can be found in Miller and Turkovi´c (2015). 3.2 Analysis Our judges agreed on which word was the pun in 1634 out of 1652 cases, a raw agreement of 98.91%. For the agreed cases, we used DKPro Agreement (Meyer et al., 2014) to compute Krippendorff’s α (Krippendorff, 1980) for the sense annotations. This is a chance-correcting metric of inter-annotator agreement ranging in (−1,1], where 1 indicates perfect agreement, −1 perfect disagreement, and 0 the expected score for random labelling. Our distance metric for α is a straightforward adaptation of the MASI set comparison metric (Passonneau, 2006). Whereas standard MASI, dM(A,B), compares two annotation sets A and B, our annotations take the form of unordered pairs of sets {A1,A2} and {B1,B2}. We therefore find the mapping between elements of the two pairs that gives the lowest total distance, and halve it: dM′({A1,A2},{B1,B2}) = 1 2 min(dM(A1,B1) + dM(A2,B2),dM(A1,B2) + dM(A2,B1)). With this method we observe a Krippendorff’s α of 0.777; this is only slightly below the 0.8 threshold recommended by Krippendorff, and far higher than what has been reported in other sense annotation studies (Passonneau et al., 2006; Jurgens and Klapaftis, 2013). Where possible, we resolved sense annotation disagreements automatically by taking the intersection of corresponding sense sets. Where the annotators’ sense sets were disjoint or contradictory (including the cases where the annotators disagreed on the pun word), we had a human adjudicator attempt to resolve the disagreement in favour of one annotator or the other. This left us with 1607 instances,4 of which we retained only the 1298 that had successful (i.e., not marked as unassignable) 4Pending clearance of the distribution rights, we will make some or all of our annotated data set available on our website at https://www.ukp.tu-darmstadt.de/data/. 723 annotations for the present study. The contexts in this data set range in length from 3 to 44 words, with an average length of 11.9. The 2596 meanings carry sense key annotations corresponding to anywhere from one to seven WordNet synsets, with an average of 1.08. As expected, then, WordNet’s sense granularity proved to be somewhat finer than necessary to characterize the meanings in the data set, though only marginally so. Of the 2596 individual meanings, 1303 (50.2%) were annotated with noun senses only, 877 (33.8%) with verb senses only, 340 (13.1%) with adjective senses only, and 41 (1.6%) with adverb senses only. Only 35 individual meanings (1.3%) carry sense annotations corresponding to multiple parts of speech. However, for 297 (22.9%) of our puns, the two meanings had different parts of speech. Similarly, sense annotations for each individual meaning correspond to anywhere from one to four different lemmas, with a mean of 1.25. These observations confirm the concerns we raised in §2.2 that pun disambiguators, unlike traditional WSD systems, cannot always rely on the output of a lemmatizer or part-of-speech tagger to narrow down the list of sense candidates. 4 Pun disambiguation It has long been observed that gloss overlap–based WSD systems, such as those based on the Lesk algorithm, fail to distinguish between candidate senses when their definitions have a similar overlap with the target word’s context. 
In some cases this is because the overlap is negligible or nonexistent; this is known as the lexical gap problem, and various solutions to it are discussed in (inter alia) Miller et al. (2012). In other cases, the indecision arises because the definitions provided by the sense inventory are too fine-grained; this problem has been addressed, with varying degrees of success, through sense clustering or coarsening techniques (a short but reasonably comprehensive survey of which appears in Matuschek et al. (2014)). A third condition under which senses cannot be discriminated is when the target word is used in an underspecified or intentionally ambiguous manner. We hold that for this third scenario a disambiguator’s inability to discriminate senses should not be seen as a failure condition, but rather as a limitation of the WSD task as traditionally defined. By reframing the task so as to permit the assignment of multiple senses (or groups of senses), we can allow disambiguation systems to sense-annotate intentionally ambiguous constructions such as puns. Many approaches to WSD, including Lesk-like algorithms, involve computing some score for all possible senses of a target word, and then selecting the single highest-scoring one as the “correct” sense. The most straightforward modification of these techniques to pun disambiguation, then, is to have the systems select the two top-scoring senses, one for each meaning of the pun. Accordingly we applied this modification to the following knowledge-based WSD algorithms: Simplified Lesk (Kilgarriff and Rosenzweig, 2000) disambiguates a target word by examining the definitions5 for each of its candidate senses and selecting the single sense—or in our case, the two senses—which have the greatest number of words in common with the context. As we previously demonstrated that puns often transcend part of speech, our pool of candidate senses is constructed as follows: we apply a morphological analyzer to recover all possible lemmas of the target word without respect to part of speech, and for each lemma we add all its senses to the pool. Simplified extended Lesk (Ponzetto and Navigli, 2010) is similar to simplified Lesk, except that the definition for each sense is concatenated with those of neighbouring senses in WordNet’s semantic network. Simplified lexically expanded Lesk (Miller et al., 2012) is also based on simplified Lesk, with the extension that every word in the context and sense definitions is expanded with up to 100 entries from a large distributional thesaurus. The above algorithms fail to make a sense assignment when more than two senses are tied for the highest lexical overlap, or when there is a single highest-scoring sense but multiple senses are tied for the second-highest overlap. We therefore devised two pun-specific tie-breaking strategies. The first is motivated by the informal observation that, though the two meanings of a pun may have different parts of speech, at least one of the parts 5In our implementation, the sense definitions are formed by concatenating the synonyms, gloss, and example sentences provided by WordNet. 724 of speech is grammatical in the context of the sentence, and so would probably be the one assigned by a stochastic or rule-based POS tagger. Our “POS” tie-breaker therefore preferentially selects the best sense, or pair of senses, whose POS matches the one applied to the target by the Stanford POS tagger (Toutanova et al., 2003). 
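The two-sense variant of simplified Lesk described above can be sketched as follows. The data structure (a dict from candidate sense to its definition tokens, pooled over all lemmas and parts of speech) is our simplification; the refusal conditions mirror the tie cases named in the text.

```python
from collections import Counter

def simplified_lesk_two(context_tokens, sense_definitions, stopwords=frozenset()):
    """Two-sense simplified Lesk (sketch).

    context_tokens:    tokens of the pun's context
    sense_definitions: dict mapping each candidate sense id to its definition
                       tokens (synonyms + gloss + examples, already tokenised)
    Returns the two distinct senses with the largest word overlap, or None
    when ties make the top-two choice ambiguous (the cases the pun-specific
    tie-breakers are meant to resolve).
    """
    context = set(t.lower() for t in context_tokens) - stopwords
    scores = Counter({sense: len(context & set(d.lower() for d in defn))
                      for sense, defn in sense_definitions.items()})
    ranked = scores.most_common()
    if len(ranked) < 2:
        return None
    # fail when more than two senses tie for the top, or when the
    # second-highest overlap is not unique
    if len(ranked) > 2 and ranked[2][1] == ranked[1][1]:
        return None
    return ranked[0][0], ranked[1][0]
```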
For our second tie-breaking strategy, we posit that since humour derives from the resolution of semantic incongruity (Raskin, 1985; Attardo, 1994), puns are more likely to exploit coarse-grained homonymy than than fine-grained systematic polysemy. Thus, following Matuschek et al. (2014), we induced a clustering of WordNet senses by aligning WordNet to the more coarse-grained OmegaWiki LSR.6 Our “cluster” fallback works the same as the “POS” one, with the addition that any remaining ties among senses with the second-highest overlap are resolved by preferentially selecting those which are not in the same induced cluster as, and which in WordNet’s semantic network are at least three edges distant from, the sense with the highest overlap. 5 Evaluation 5.1 Scoring In traditional word sense disambiguation, in vitro evaluations are conducted by comparing the senses assigned by the disambiguation system to the goldstandard senses assigned by the human annotators. For the case that the system and gold-standard assignments consist of a single sense each, the exactmatch criterion is used: the system receives a score of 1 if it chose the sense specified by the gold standard, and 0 otherwise. Where the system selects a single sense for an instance for which there is more than one correct gold standard sense, the multiple tags are interpreted disjunctively—that is, the system receives a score of 1 if it chose any one of the gold-standard senses, and 0 otherwise. Overall performance is reported in terms of coverage (the number of targets for which a sense assignment was attempted), precision (the sum of scores divided by the number of attempted targets), recall (the sum of scores divided by the total number of targets in the data set), and F1 (the harmonic mean of precision and recall) (Palmer et al., 2006). The traditional approach to scoring individual targets is not usable as-is for pun disambiguation, because each pun carries two disjoint but equally valid sets of sense annotations. Instead, since our 6http://www.omegawiki.org/ systems assign exactly one sense to each of the pun’s two sense sets, we count this as a match (scoring 1) only if each chosen sense can be found in one of the gold-standard sense sets, and no two gold-standard sense sets contain the same chosen sense. (As with traditional WSD scoring, various approaches could be used to assign credit for partially correct assignments, though we leave exploration of these to future work.) 5.2 Baselines System performance in WSD is normally interpreted with reference to one or more baselines. To our knowledge, ours is the very first study of automatic pun disambiguation on any scale, so at this point there are no previous systems against which to compare our results. However, traditional WSD systems are often compared with two na¨ıve baselines (Gale et al., 1992) which can be adapted for our purposes. The first of these na¨ıve baselines is to randomly select from among the candidate senses. In traditional WSD, the score for a random disambiguator which selects a single sense for a given target t is the number of gold-standard senses divided by the number of candidate senses: score(t) = g(t)÷δ(t). In our pun disambiguation task, however, a random disambiguator must select two senses—one for each of the sense sets g1(t) and g2(t)—and these senses must be distinct. There are δ(t) 2  possible ways of selecting two unique senses, so the random score for any given instance is score(t) = g1(t)·g2(t)÷ δ(t) 2  . 
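A sketch of the adapted scoring rule and the random baseline just described; the function names are ours, and the arguments correspond to the chosen sense pair, the gold sense sets g1 and g2, and the number of candidate senses δ(t).

```python
from math import comb

def score_instance(chosen, gold_pair):
    """Adapted exact-match criterion: the two chosen senses must be distinct
    and must fall into different gold-standard sense sets."""
    s1, s2 = chosen
    g1, g2 = gold_pair
    if s1 == s2:
        return 0.0
    return 1.0 if (s1 in g1 and s2 in g2) or (s1 in g2 and s2 in g1) else 0.0

def random_baseline(n_g1, n_g2, n_candidates):
    """Expected score of a random disambiguator picking two distinct senses:
    |g1| * |g2| / C(delta(t), 2)."""
    return n_g1 * n_g2 / comb(n_candidates, 2)
```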
The second naïve baseline for WSD, known as most frequent sense (MFS), is a supervised baseline, meaning that it depends on a manually sense-annotated background corpus. As its name suggests, it involves always selecting from the candidates that sense which has the highest frequency in the corpus. As with our test algorithms, we adapt this technique to pun disambiguation by having it select the two most frequent senses (according to WordNet's built-in sense frequency counts). In traditional WSD, MFS baselines are notoriously difficult to beat, even for supervised disambiguation systems, and since they rely on expensive sense-tagged data they are not normally considered a benchmark for the performance of knowledge-based disambiguators.

System        Coverage  Precision  Recall     F1
SL               35.52      19.74    7.01  10.35
SEL              42.45      19.96    8.47  11.90
SLEL            *98.69      13.43   13.25  13.34
SEL+POS          59.94     *21.21   12.71  15.90
SEL+cluster      68.10      20.70  *14.10 *16.77
random          100.00       9.31    9.31   9.31
MFS             100.00      13.25   13.25  13.25

Table 1: Coverage, precision, recall, and F1 for various pun disambiguation algorithms.

6 Results

Using the freely available DKPro WSD framework (Miller et al., 2013), we implemented our pun disambiguation algorithms, ran them on our full data set, and compared their annotations against those of our manually produced gold standard. Table 1 shows the coverage, precision, recall, and F1 for simplified Lesk (SL), simplified extended Lesk (SEL), simplified lexically expanded Lesk (SLEL), and the random and most frequent sense baselines; for SEL we also report results for each of our pun-specific tie-breaking strategies. All metrics are reported as percentages, and the highest score for each metric (excluding baseline coverage, which is always 100%) is marked with an asterisk. Accuracy for the random baseline annotator was about 9%; for the MFS baseline it was just over 13%. These figures are considerably lower than what is typically seen with traditional WSD corpora, where random baselines achieve accuracies of 30 to 60%, and MFS baselines 65 to 80% (Palmer et al., 2001; Snyder and Palmer, 2004; Navigli et al., 2007). Our baselines' low figures are the result of their having to consider senses from every possible lemmatization and part of speech of the target, and they underscore the difficulty of our task.

The simplest knowledge-based algorithm we tested, simplified Lesk, was over twice as accurate as the random baseline in terms of precision (19.74%), but predictably had very low coverage (35.52%), leading in turn to very low recall (7.01%). Manual examination of the unassigned instances confirmed that failure was usually due to the lack of any lexical overlap whatsoever between the context and definitions. The use of a tie-breaking strategy would not help much here, though some way of bridging the lexical gap would. This is, in fact, the strategy employed by the extended and lexically expanded variants of simplified Lesk, and we observed that both were successful to some degree. Simplified lexically expanded Lesk almost completely closed the lexical gap, with nearly complete coverage (98.69%), though this came at the expense of a large drop in precision (to 13.43%). Given the near-total coverage, use of a tie-breaking strategy here would have no appreciable effect on the accuracy. Simplified extended Lesk, on the other hand, saw significant increases in coverage, precision, and recall (to 42.45%, 19.96%, and 8.47%, respectively).
Its recall is statistically indistinguishable from the random baseline (all significance statements in this section are based on McNemar's test at a confidence level of 5%), though spot-checks of its unassigned instances show that the problem is very frequently not the lexical gap but rather multiple senses tied for the greatest overlap with the context. We therefore tested our two pun-specific backoff strategies to break this system's ties. Using the "POS" strategy increased coverage by 41%, relatively speaking, and gave us our highest observed precision of 21.21%. Our "cluster" strategy effected a relative increase in coverage of over 60%, and gave us the best recall (14.10%). This strategy also had the best tradeoff between precision and recall, with an F1 of 16.77%.

Significance testing shows the recall scores for SLEL, SEL+POS, and SEL+cluster to be significantly better than the random baseline, and statistically indistinguishable from that of MFS. This is excellent news, especially in light of the fact that supervised approaches (even baselines like MFS) usually outperform their knowledge-based counterparts. Though the three knowledge-based systems are not statistically distinguishable from each other in terms of recall, they do show a statistically significant improvement over SL and SEL, and the two implementing pun-specific tie-breaking strategies were markedly more accurate than SLEL for those targets where they attempted an assignment. These two systems would therefore be preferable for applications where precision is more important than recall.

We also examined the results of our generally best-performing system, SEL+cluster, to see whether there was any relationship with the targets' part of speech. We filtered the results according to whether both gold-standard meanings of the pun contain senses for nouns only, verbs only, adjectives only, or adverbs only; these amounted to 539, 346, 106, and 8 instances, respectively. These results are shown in Table 2.

POS    Coverage  Precision  Recall  Random recall
noun      66.60      20.89   13.91          10.44
verb      65.61      14.54    9.54           5.12
adj.      68.87      39.73   27.36          16.84
adv.     100.00      75.00   75.00          46.67
pure      66.77      21.44   14.31           9.56
mult.     72.58      18.43   13.38          12.18

Table 2: Coverage, precision, and recall for SEL+cluster, and random baseline recall, according to part of speech.

Also shown there is a row which aggregates the 999 targets with "pure" POS, and another for the remaining 608 instances ("mult."), where one or both of the two meanings contain senses for multiple parts of speech, or where the two meanings have different parts of speech. The last column of each row shows the recall of the random baseline for comparison. Accuracy was lowest on the verbs, which had the highest candidate polysemy (21.6) and are known to be particularly difficult to disambiguate even in traditional WSD. Still, as with all the other single parts of speech, performance of SEL+cluster exceeded the random baseline. While recall was lower on targets with mixed POS than those with pure POS, coverage was significantly higher. Normally such a disparity could be attributed to a difference in polysemy: Lesk-like systems are more likely to attempt a sense assignment for highly polysemous targets, since there is a greater likelihood of one of the candidate definitions matching the context, though the probability of the assignment being correct is reduced. In this case, however, the multi-POS targets actually had lower average polysemy than the single-POS ones (13.2 vs. 15.8).
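The significance statements above rest on McNemar's test over the paired correct/incorrect decisions of two systems. A minimal sketch of the exact two-sided form over the discordant counts is given below; whether the authors used this exact variant or the chi-squared approximation is not stated, and the counts in the usage line are hypothetical.

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact (binomial) two-sided McNemar test.
    b: instances system A got right and system B got wrong; c: the reverse."""
    n = b + c
    if n == 0:
        return 1.0
    tail = sum(comb(n, i) for i in range(min(b, c) + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# hypothetical counts: reject at the 5% level if the p-value is below 0.05
print(mcnemar_exact(40, 18) < 0.05)  # True
```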
7 Conclusion In this paper we have introduced the novel task of pun disambiguation and have proposed and evaluated several computational approaches for it. The major contributions of this work are as follows: First, we have produced a new data set consisting of manually sense-annotated homographic puns. The data set is large enough, and the manual annotations reliable enough, for a principled evaluation of automatic pun disambiguation systems. Second, we have shown how evaluation metrics, baselines, and disambiguation algorithms from traditional WSD can be adapted to the task of pun disambiguation, and we have tested these adaptations in a controlled experiment. The results show pun disambiguation to be a particularly challenging task for NLP, with baseline results far below what is commonly seen in traditional WSD. We showed that knowledge-based disambiguation algorithms na¨ıvely adapted from traditional WSD perform poorly, but that extending them with strategies that rely on pun-specific features brings about dramatic improvements in accuracy: their recall becomes comparable to that of a supervised baseline, and their precision greatly exceeds it. There are a number of avenues we intend to explore in future work. First, we would like to try adapting and evaluating some additional WSD algorithms for use with puns. Though our data set is probably too small to use with machine learning– based approaches, we are particularly interested in testing knowledge-based disambiguators which rely on measures of graph connectivity rather than gloss overlaps. Second, we would like to investigate alternative tie-breaking strategies, such as the domain similarity measures used by Mihalcea et al. (2010). Finally, whereas in this paper we have treated only the task of sense disambiguation for the case where a word is known a priori to be a pun, we are interested in exploring the requisite problem of pun detection, where the object is to determine whether or not a given context contains a pun, and more precisely whether any given word in a context is a pun. Acknowledgments The work described in this paper is supported by the Volkswagen Foundation as part of the Lichtenberg Professorship Program under grant No. I/82806. The authors thank John Black, Matthew Collins, Don Hauptman, Christian F. Hempelmann, Stan Kegel, Andrew Lamont, Beatrice Santorini, Mladen Turkovi´c, and Andreas Zimpfer for helping us build our data set. References Salvatore Attardo. 1994. Linguistic Theories of Humor. Mouton de Gruyter. Pierpaolo Basile, Annalina Caputo, and Giovanni Semeraro. 2014. An enhanced Lesk word sense disam727 biguation algorithm through a distributional semantic model. In Proceedings of the 25th International Conference on Computational Linguistics (COLING 2014), pages 1591–1600. Tristan A. Bekinschtein, Matthew H. Davis, Jennifer M. Rodd, and Adrian M. Owen. 2011. Why clowns taste funny: The relationship between humor and semantic ambiguity. The Journal of Neuroscience, 31(26):9665–9671, June. Nancy D. Bell, Scott Crossley, and Christian F. Hempelmann. 2011. Wordplay in church marquees. Humor: International Journal of Humor Research, 24(2):187– 202, April. Chiara Bucaria. 2004. Lexical and syntactic ambiguity as a source of humor: The case of newspaper headlines. Humor: International Journal of Humor Research, 17(3):279–309. Paul Buitelaar. 2000. Reducing lexical semantic complexity with systematic polysemous classes and underspecification. 
In Proceedings of the 2000 NAACLANLP Workshop on Syntactic and Semantic Complexity in Natural Language Processing Systems, volume 1, pages 14–19. Jonathan D. Culler, editor. 1988. On Puns: The Foundation of Letters. Basil Blackwell, Oxford. Dirk Delabastita, editor. 1997. Traductio: Essays on Punning and Translation. St. Jerome, Manchester. Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA. William Gale, Kenneth Ward Church, and David Yarowsky. 1992. Estimating upper and lower bounds on the performance of word-sense disambiguation programs. In Proceedings of the 30th Annual Meeting of the Association of Computational Linguistics (ACL 1992), pages 249–256. Christian F. Hempelmann. 2003. Paronomasic Puns: Target Recoverability Towards Automatic Generation. Ph.D. thesis, Purdue University. Bryan Anthony Hong and Ethel Ong. 2009. Automatically extracting word relationships as templates for pun generation. In Proceedings of the 1st Workshop on Computational Approaches to Linguistic Creativity (CALC 2009), pages 24–31, June. David Jurgens and Ioannis Klapaftis. 2013. SemEval2013 Task 13: Word sense induction for graded and non-graded senses. In Proceedings of the 7th International Workshop on Semantic Evaluation (SemEval 2013), pages 290–299, June. David Jurgens. 2014. An analysis of ambiguity in word sense annotations. In Proceedings of the 9th International Conference on Language Resources and Evaluation (LREC 2014), pages 3006–3012, May. Nora Kaplan and Teresa Lucas. 2001. Comprensi´on del humorismo en ingl´es: Estudio de las estrategias de inferencia utilizadas por estudiantes avanzados de ingl´es como lengua extranjera en la interpretaci´on de los retru´ecanos en historietas c´omicas en lengua inglesa. Anales de la Universidad Metropolitana, 1(2):245–258. Stefan Daniel Keller. 2009. The Development of Shakespeare’s Rhetoric: A Study of Nine Plays, volume 136 of Swiss Studies in English. Narr, T¨ubingen. Adam Kilgarriff and Joseph Rosenzweig. 2000. Framework and results for English SENSEVAL. Computers and the Humanities, 34:15–48. Klaus Krippendorff. 1980. Content Analysis: An Introduction to its Methodology. Sage, Beverly Hills, CA. Luuk Lagerwerf. 2002. Deliberate ambiguity in slogans: Recognition and appreciation. Document Design, 3(3):245–260. Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from a ice cream cone. In Proceedings of the 5th Annual International Conference of Systems Documentation (SIGDOC 1986), pages 24–26. Greg Lessard, Michael Levison, and Chris Venour. 2002. Cleverness versus funniness. In Proceedings of the 20th Twente Workshop on Language Technology, pages 137–145. Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics (ACL 1998) and the 17th International Conference on Computational Linguistics (COLING 1998), volume 2, pages 768–774. Teresa Lucas. 2004. Deciphering the Meaning of Puns in Learning English as a Second Language: A Study of Triadic Interaction. Ph.D. thesis, Florida State University. Peter J. Ludlow. 1996. Semantic Ambiguity and Underspecification (review). Computational Linguistics, 3(23):476–482. Michael Matuschek, Tristan Miller, and Iryna Gurevych. 2014. A language-independent sense clustering approach for enhanced WSD. In Proceedings of the 12th Konferenz zur Verarbeitung nat¨urlicher Sprache (KONVENS 2014), pages 11–21, October. 
Christian M. Meyer, Margot Mieskes, Christian Stab, and Iryna Gurevych. 2014. DKPro Agreement: An open-source Java library for measuring inter-rater agreement. In Proceedings of the 25th International Conference on Computational Linguistics (System Demonstrations) (COLING 2014), pages 105–109, August. 728 Rada Mihalcea and Carlo Strapparava. 2005. Making computers laugh: Investigations in automatic humor recognition. In Proceedings of the 11th Human Language Technology Conference and the 10th Conference on Empirical Methods in Natural Language Processing (HLT-EMNLP 2005), pages 531–538, October. Rada Mihalcea and Carlo Strapparava. 2006. Learning to laugh (automatically): Computational models for humor recognition. Computational Intelligence, 22(2):126–142. Rada Mihalcea, Carlo Strapparava, and Stephen Pulman. 2010. Computational models for incongruity detection in humour. In Proceedings of the 11th International Conference on Computational Linguistics and Intelligent Text Processing (CICLing 2010), volume 6008 of Lecture Notes in Computer Science, pages 364–374. Springer, March. Tristan Miller and Mladen Turkovi´c. 2015. Towards the automatic detection and identification of English puns. European Journal of Humour Research. To appear. Tristan Miller, Chris Biemann, Torsten Zesch, and Iryna Gurevych. 2012. Using distributional similarity for lexical expansion in knowledge-based word sense disambiguation. In Proceedings of the 24th International Conference on Computational Linguistics (COLING 2012), pages 1781–1796, December. Tristan Miller, Nicolai Erbs, Hans-Peter Zorn, Torsten Zesch, and Iryna Gurevych. 2013. DKPro WSD: A generalized UIMA-based framework for word sense disambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (System Demonstrations) (ACL 2013), pages 37–42, August. Michel Monnot. 1982. Puns in advertising: Ambiguity as verbal aggression. Maledicta, 6:7–20. Roberto Navigli, Kenneth C. Litkowski, and Orin Hargraves. 2007. SemEval-2007 Task 07: Coarsegrained English All-words Task. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval-2007), pages 30–35, June. Martha Palmer, Christiane Fellbaum, Scott Cotton, Lauren Delfs, and Hoa Trang Dang. 2001. English tasks: All-words and verb lexical sample. In Proceedings of Senseval-2: 2nd International Workshop on Evaluating Word Sense Disambiguation Systems, pages 21–24, July. Martha Palmer, Hwee Tou Ng, and Hoa Trang Dang. 2006. Evaluation of WSD systems. In Eneko Agirre and Philip Edmonds, editors, Word Sense Disambiguation: Algorithms and Applications, volume 33 of Text, Speech, and Language Technology. Springer. Rebecca J. Passonneau, Nizar Habash, and Owen Rambow. 2006. Inter-annotator agreement on a multilingual semantic annotation task. In Proceedings of the 5th International Conference on Language Resources and Evaluations (LREC 2006), pages 1951–1956. Rebecca J. Passonneau. 2006. Measuring agreement on set-valued items (MASI) for semantic and pragmatic annotation. In Proceedings of the 5th International Conference on Language Resources and Evaluations (LREC 2006), pages 831–836. Simone Paolo Ponzetto and Roberto Navigli. 2010. Knowledge-rich word sense disambiguation rivaling supervised systems. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL 2010), pages 1522–1531. Vitor Raskin. 1985. Semantic Mechanisms of Humor. D. Reidel, Dordrecht, the Netherlands. Walter Redfern. 1984. Puns. 
Basil Blackwell, Oxford. Thorsten Schr¨oter. 2005. Shun the Pun, Rescue the Rhyme? The Dubbing and Subtitling of LanguagePlay in Film. Ph.D. thesis, Karlstad University. Benjamin Snyder and Martha Palmer. 2004. The English all-words task. In Proceedings of the 3rd International Workshop on the Evaluation of Systems for the Semantic Analysis of Text (Senseval-3), pages 41–43, July. Julia M. Taylor and Lawrence J. Mazlack. 2004. Computationally recognizing wordplay in jokes. In Proceedings of the 26th Annual Conference of the Cognitive Science Society (CogSci 2004), pages 1315– 1320, August. Kristina Toutanova, Dan Klein, Christopher Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 3rd Conference of the North American Chapter of the Association for Computational Linguistics and the 9th Human Language Technologies Conference (HLT-NAACL 2003), pages 252–259. Alessandro Valitutti, Carlo Strapparava, and Oliviero Stock. 2008. Textual affect sensing for computational advertising. In Proceedings of the AAAI Spring Symposium on Creative Intelligent Systems, pages 117–122, March. Chris Venour. 1999. The computational generation of a class of puns. Master’s thesis, Queen’s University, Kingston, ON. Toshihiko Yokogawa. 2002. Japanese pun analyzer using articulation similarities. In Proceedings of the 11th IEEE International Conference on Fuzzy Systems (FUZZ 2002), volume 2, pages 1114–1119, May. Arnold M. Zwicky and Elizabeth D. Zwicky. 1986. Imperfect puns, markedness, and phonological similarity: With fronds like these, who needs anemones? Folia Linguistica, 20(3–4):493–503. 729
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 730–740, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Unsupervised Cross-Domain Word Representation Learning Danushka Bollegala Takanori Maehara Ken-ichi Kawarabayashi danushka.bollegala@ maehara.takanori@ k keniti@ liverpool.ac.uk shizuoka.ac.jp nii.ac.jp University of Liverpool Shizuoka University National Institute of Informatics JST, ERATO, Kawarabayashi Large Graph Project. Abstract Meaning of a word varies from one domain to another. Despite this important domain dependence in word semantics, existing word representation learning methods are bound to a single domain. Given a pair of source-target domains, we propose an unsupervised method for learning domain-specific word representations that accurately capture the domainspecific aspects of word semantics. First, we select a subset of frequent words that occur in both domains as pivots. Next, we optimize an objective function that enforces two constraints: (a) for both source and target domain documents, pivots that appear in a document must accurately predict the co-occurring non-pivots, and (b) word representations learnt for pivots must be similar in the two domains. Moreover, we propose a method to perform domain adaptation using the learnt word representations. Our proposed method significantly outperforms competitive baselines including the state-of-theart domain-insensitive word representations, and reports best sentiment classification accuracies for all domain-pairs in a benchmark dataset. 1 Introduction Learning semantic representations for words is a fundamental task in NLP that is required in numerous higher-level NLP applications (Collobert et al., 2011). Distributed word representations have gained much popularity lately because of their accuracy as semantic representations for words (Mikolov et al., 2013a; Pennington et al., 2014). However, the meaning of a word often varies from one domain to another. For example, the phrase lightweight is often used in a positive sentiment in the portable electronics domain because a lightweight device is easier to carry around, which is a positive attribute for a portable electronic device. However, the same phrase has a negative sentiment assocition in the movie domain because movies that do not invoke deep thoughts in viewers are considered to be lightweight (Bollegala et al., 2014). However, existing word representation learning methods are agnostic to such domain-specific semantic variations of words, and capture semantics of words only within a single domain. To overcome this problem and capture domain-specific semantic orientations of words, we propose a method that learns separate distributed representations for each domain in which a word occurs. Despite the successful applications of distributed word representation learning methods (Pennington et al., 2014; Collobert et al., 2011; Mikolov et al., 2013a) most existing approaches are limited to learning only a single representation for a given word (Reisinger and Mooney, 2010). Although there have been some work on learning multiple prototype representations (Huang et al., 2012; Neelakantan et al., 2014) for a word considering its multiple senses, such methods do not consider the semantics of the domain in which the word is being used. 
If we can learn separate representations for a word for each domain in which it occurs, we can use the learnt representations for domain adaptation tasks such as cross-domain sentiment classification (Bollegala et al., 2011b), cross-domain POS tagging (Schnabel and Sch¨utze, 2013), crossdomain dependency parsing (McClosky et al., 2010), and domain adaptation of relation extractors (Bollegala et al., 2013a; Bollegala et al., 2013b; Bollegala et al., 2011a; Jiang and Zhai, 2007a; Jiang and Zhai, 2007b). We introduce the cross-domain word represen730 tation learning task, where given two domains, (referred to as the source (S) and the target (T )) the goal is to learn two separate representations wS and wT for a word w respectively from the source and the target domain that capture domainspecific semantic variations of w. In this paper, we use the term domain to represent a collection of documents related to a particular topic such as user-reviews in Amazon for a product category (e.g. books, dvds, movies, etc.). However, a domain in general can be a field of study (e.g. biology, computer science, law, etc.) or even an entire source of information (e.g. twitter, blogs, news articles, etc.). In particular, we do not assume the availability of any labeled data for learning word representations. This problem setting is closely related to unsupervised domain adaptation (Blitzer et al., 2006), which has found numerous useful applications such as, sentiment classification and POS tagging. For example, in unsupervised cross-domain sentiment classification (Blitzer et al., 2006; Blitzer et al., 2007), we train a binary sentiment classifier using positive and negative labeled user reviews in the source domain, and apply the trained classifier to predict sentiment of the target domain’s user reviews. Although the distinction between the source and the target domains is not important for the word representation learning step, it is important for the domain adaptation tasks in which we subsequently evaluate the learnt word representations. Following prior work on domain adaptation (Blitzer et al., 2006), high-frequent features (unigrams/bigrams) common to both domains are referred to as domain-independent features or pivots. In contrast, we use non-pivots to refer to features that are specific to a single domain. We propose an unsupervised cross-domain word representation learning method that jointly optimizes two criteria: (a) given a document d from the source or the target domain, we must accurately predict the non-pivots that occur in d using the pivots that occur in d, and (b) the source and target domain representations we learn for pivots must be similar. The main challenge in domain adaptation is feature mismatch, where the features that we use for training a classifier in the source domain do not necessarily occur in the target domain. Consequently, prior work on domain adaptation (Blitzer et al., 2006; Pan et al., 2010) learn lower-dimensional mappings from non-pivots to pivots, thereby overcoming the feature mismatch problem. Criteria (a) ensures that word representations for domain-specific non-pivots in each domain are related to the word representations for domain-independent pivots. This relationship enables us to discover pivots that are similar to target domain-specific non-pivots, thereby overcoming the feature mismatch problem. On the other hand, criteria (b) captures the prior knowledge that high-frequent words common to two domains often represent domain-independent semantics. 
For example, in sentiment classification, words such as excellent or terrible would express similar sentiment about a product irrespective of the domain. However, if a pivot expresses different semantics in source and the target domains, then it will be surrounded by dissimilar sets of non-pivots, and reflected in the first criteria. Criteria (b) can also be seen as a regularization constraint imposed on word representations to prevent overfitting by reducing the number of free parameters in the model. Our contributions in this paper can be summarized as follows. • We propose a distributed word representation learning method that learns separate representations for a word for each domain in which it occurs. To the best of our knowledge, ours is the first-ever domain-sensitive distributed word representation learning method. • Given domain-specific word representations, we propose a method to learn a cross-domain sentiment classifier. Although word representation learning methods have been used for various related tasks in NLP such as similarity measurement (Mikolov et al., 2013c), POS tagging (Collobert et al., 2011), dependency parsing (Socher et al., 2011a), machine translation (Zou et al., 2013), sentiment classification (Socher et al., 2011b), and semantic role labeling (Roth and Woodsend, 2014), to the best of our knowledge, word representations methods have not yet been used for crossdomain sentiment classification. Experimental results for cross-domain sentiment classification on a benchmark dataset show that the word representations learnt using the proposed method statistically significantly outper731 form a state-of-the-art domain-insensitive word representation learning method (Pennington et al., 2014), and several competitive baselines. In particular, our proposed cross-domain word representation learning method is not specific to a particular task such as sentiment classification, and in principle, can be in applied to a wide-range of domain adaptation tasks. Despite this taskindependent nature of the proposed method, it achieves the best sentiment classification accuracies on all domain-pairs, reporting statistically comparable results to the current state-of-the-art unsupervised cross-domain sentiment classification methods (Pan et al., 2010; Blitzer et al., 2006). 2 Related Work Representing the semantics of a word using some algebraic structure such as a vector (more generally a tensor) is a common first step in many NLP tasks (Turney and Pantel, 2010). By applying algebraic operations on the word representations, we can perform numerous tasks in NLP, such as composing representations for larger textual units beyond individual words such as phrases (Mitchell and Lapata, 2008). Moreover, word representations are found to be useful for measuring semantic similarity, and for solving proportional analogies (Mikolov et al., 2013c). Two main approaches for computing word representations can be identified in prior work (Baroni et al., 2014): counting-based and prediction-based. In counting-based approaches (Baroni and Lenci, 2010), a word w is represented by a vector w that contains other words that co-occur with w in a corpus. Numerous methods for selecting co-occurrence contexts such as proximity or dependency relations have been proposed (Turney and Pantel, 2010). Despite the numerous successful applications of co-occurrence countingbased distributional word representations, their high dimensionality and sparsity are often problematic in practice. 
Consequently, further postprocessing steps such as dimensionality reduction, and feature selection are often required when using counting-based word representations. On the other hand, prediction-based approaches first assign each word, for example, with a ddimensional real-vector, and learn the elements of those vectors by applying them in an auxiliary task such as language modeling, where the goal is to predict the next word in a given sequence. The dimensionality d is fixed for all the words in the vocabulary, and, unlike counting-based word representations, is much smaller (e.g. d ∈[10, 1000] in practice) compared to the vocabulary size. The neural network language model (NNLM) (Bengio et al., 2003) uses a multi-layer feed-forward neural network to predict the next word in a sequence, and uses backpropagation to update the word vectors such that the prediction error is minimized. Although NNLMs learn word representations as a by-product, the main focus on language modeling is to predict the next word in a sentence given the previous words, and not learning word representations that capture semantics. Moreover, training multi-layer neural networks using large text corpora is time consuming. To overcome those limitations, methods that specifically focus on learning word representations that model word co-occurrences in large corpora have been proposed (Mikolov et al., 2013a; Mnih and Kavukcuoglu, 2013; Huang et al., 2012; Pennington et al., 2014). Unlike the NNLM, these methods use all the words in a contextual window in the prediction task. Methods that use one or no hidden layers are proposed to improve the scalability of the learning algorithms. For example, the skip-gram model (Mikolov et al., 2013b) predicts the words c that appear in the local context of a word w, whereas the continuous bag-of-words model (CBOW) predicts a word w conditioned on all the words c that appear in w’s local context (Mikolov et al., 2013a). Methods that use global co-occurrences in the entire corpus to learn word representations have shown to outperform methods that use only local cooccurrences (Huang et al., 2012; Pennington et al., 2014). Overall, prediction-based methods have shown to outperform counting-based methods (Baroni et al., 2014). Despite their impressive performance, existing methods for word representation learning do not consider the semantic variation of words across different domains. However, as described in Section 1, the meaning of a word vary from one domain to another, and must be considered. To the best of our knowledge, the only prior work studying the problem of word representation variation across domains is due to Bollegala et al. (2014). Given a source and a target domain, they first select a set of pivots using pointwise mutual information, and create two distributional representa732 tions for each pivot using their co-occurrence contexts in a particular domain. Next, a projection matrix from the source to the target domain feature spaces is learnt using partial least squares regression. Finally, the learnt projection matrix is used to find the nearest neighbors in the source domain for each target domain-specific features. However, unlike our proposed method, their method does not learn domain-specific word representations, but simply uses co-occurrence counting when creating in-domain word representations. Faralli et al. 
(2012) proposed a domain-driven word sense disambiguation (WSD) method where they construct glossaries for several domain using a pattern-based bootstrapping technique. This work demonstrates the importance of considering the domain specificity of word senses. However, the focus of their work is not to learn representations for words or their senses in a domain, but to construct glossaries. It would be an interesting future research direction to explore the possibility of using such domain-specific glossaries for learning domain-specific word representations. Neelakantan et al. (2014) proposed a method that jointly performs WSD and word embedding learning, thereby learning multiple embeddings per word type. In particular, the number of senses per word type is automatically estimated. However, their method is limited to a single domain, and does not consider how the representations vary across domains. On the other hand, our proposed method learns a single representation for a particular word for each domain in which it occurs. Although in this paper we focus on the monolingual setting where source and target domains belong to the same language, the related setting where learning representations for words that are translational pairs across languages has been studied (Hermann and Blunsom, 2014; Klementiev et al., 2012; Gouws et al., 2015). Such representations are particularly useful for cross-lingual information retrieval (Duc et al., 2010). It will be an interesting future research direction to extend our proposed method to learn such cross-lingual word representations. 3 Cross-Domain Representation Learning We propose a method for learning word representations that are sensitive to the semantic variations of words across domains. We call this problem cross-domain word representation learning, and provide a definition in Section 3.1. Next, in Section 3.2, given a set of pivots that occurs in both a source and a target domain, we propose a method for learning cross-domain word representations. We defer the discussion of pivot selection methods to Section 3.4. In Section 3.5, we propose a method for using the learnt word representations to train a cross-domain sentiment classifier. 3.1 Problem Definition Let us assume that we are given two sets of documents DS and DT respectively for a source (S) and a target (T ) domain. We do not consider the problem of retrieving documents for a domain, and assume such a collection of documents to be given. Then, given a particular word w, we define cross-domain representation learning as the task of learning two separate representations wS and wT capturing w’s semantics in respectively the source S and the target T domains. Unlike in domain adaptation, where there is a clear distinction between the source (i.e. the domain on which we train) vs. the target (i.e. the domain on which we test) domains, for representation learning purposes we do not make a distinction between the two domains. In the unsupervised setting of the cross-domain representation learning that we study in this paper, we do not assume the availability of labeled data for any domain for the purpose of learning word representations. As an extrinsic evaluation task, we apply the trained word representations for classifying sentiment related to user-reviews (Section 3.5). However, for this evaluation task we require sentiment-labeled user-reviews from the source domain. 
Decoupling of the word representation learning from any tasks in which those representations are subsequently used, simplifies the problem as well as enables us to learn task-independent word representations with potential generic applicability. Although we limit the discussion to a pair of domains for simplicity, the proposed method can be easily extended to jointly learn word representations for more than two domains. In fact, prior work on cross-domain sentiment analysis show that incorporating multiple source domains improves sentiment classification accuracy on a target domain (Bollegala et al., 2011b; Glorot et al., 2011). 733 3.2 Proposed Method To describe our proposed method, let us denote a pivot and a non-pivot feature respectively by c and w. Our proposed method does not depend on a specific pivot selection method, and can be used with all previously proposed methods for selecting pivots as explained later in Section 3.4. A pivot c is represented in the source and target domains respectively by vectors cS ∈Rn and cT ∈Rn. Likewise, a source specific non-pivot w is represented by wS in the source domain, whereas a target specific non-pivot w is represented by wT in the target domain. By definition, a non-pivot occurs only in a single domain. For notational convenience we use w to denote non-pivots in both domains when the domain is clear from the context. We use CS, WS, CT , and WT to denote the sets of word representation vectors respectively for the source pivots, source non-pivots, target pivots, and target non-pivots. Let us denote the set of documents in the source and the target domains respectively by DS and DT . Following the bag-of-features model, we assume that a document D is represented by the set of pivots and non-pivots that occur in D (w ∈d and c ∈d). We consider the co-occurrences of a pivot c and a non-pivot w within a fixedsize contextual window in a document. Following prior work on representation learning (Mikolov et al., 2013a), in our experiments, we set the window size to 10 tokens, without crossing sentence boundaries. The notation (c, w) ∈d denotes the co-occurrence of a pivot c and a non-pivot w in a document d. We learn domain-specific word representations by maximizing the prediction accuracy of the nonpivots w that occur in the local context of a pivot c. The hinge loss, L(CS, WS), associated with predicting a non-pivot w in a source document d ∈DS that co-occurs with pivots c is given by: X d∈DS X (c,w)∈d X w∗∼p(w) max  0, 1 −cS ⊤wS + cS ⊤w∗ S  (1) Here, w∗ S is the source domain representation of a non-pivot w∗that does not occur in d. The loss function given by Eq. 1 requires that a non-pivot w that co-occurs with a pivot c in the document d is assigned a higher ranking score as measured by the inner-product between cS and wS than a nonpivot w∗that does not occur in d. We randomly sample k non-pivots from the set of all source domain non-pivots that do not occur in d as w∗. Specifically, we use the marginal distribution of non-pivots p(w), estimated from the corpus counts, as the sampling distribution. We raise p(w) to the 3/4-th power as proposed by Mikolov et al. (2013a), and normalize it to unit probability mass prior to sampling k non-pivots w∗per each co-occurrence of (c, w) ∈d. Because nonoccurring non-pivots w∗are randomly sampled, prior work on noise contrastive estimation has found that it requires more negative samples than positive samples to accurately learn a prediction model (Mnih and Kavukcuoglu, 2013). 
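A sketch (ours, in NumPy) of the per-triple hinge loss and the smoothed sampling distribution described above; c_vec, w_pos, and w_neg stand for the pivot, the observed non-pivot, and one sampled non-pivot.

```python
import numpy as np

def hinge_loss_and_grads(c_vec, w_pos, w_neg):
    """max(0, 1 - c.w + c.w*) for one (pivot, non-pivot, negative sample)
    triple, together with its subgradients w.r.t. the three vectors."""
    margin = 1.0 - c_vec @ w_pos + c_vec @ w_neg
    if margin <= 0.0:
        zero = np.zeros_like(c_vec)
        return 0.0, zero, zero, zero
    # d/dc = w* - w,  d/dw = -c,  d/dw* = c
    return float(margin), w_neg - w_pos, -c_vec, c_vec

def sampling_distribution(counts):
    """Unigram counts of non-pivots raised to the 3/4 power and renormalised,
    from which the k negative samples per co-occurrence are drawn."""
    p = np.asarray(counts, dtype=float) ** 0.75
    return p / p.sum()
```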
We experimentally found k = 5 to be an acceptable tradeoff between the prediction accuracy and the number of training instances. Likewise, the loss function L(CT , WT ) for predicting non-pivots using pivots in the target domain is given by: X d∈DT X (c,w)∈d X w∗∼p(w) max  0, 1 −cT ⊤wT + cT ⊤w∗ T  (2) Here, w∗denotes target domain non-pivots that do not occur in d, and are randomly sampled from p(w) following the same procedure as in the source domain. The source and target loss functions given respectively by Eqs. 1 and 2 can be used on their own to independently learn source and target domain word representations. However, by definition, pivots are common to both domains. We use this property to relate the source and target word representations via a pivot-regularizer, R(CS, CT ), defined as: R(CS, CT ) = 1 2 K X i=1 ||c(i) S −c(i) T || 2 (3) Here, ||x|| represents the l2 norm of a vector x, and c(i) is the i-th pivot in a total collection of K pivots. Word representations for non-pivots in the source and target domains are linked via the pivot regularizer because, the non-pivots in each domain are predicted using the word representations for the pivots in each domain, which in turn are regularized by Eq. 3. The overall objective function, L(CS, WS, CT , WT ), we minimize is the sum1 of 1Weighting the source and target loss functions by the respective dataset sizes did not result in any significant increase in performance. We believe that this is because the benchmark dataset contains approximately equal numbers of documents for each domain. 734 the source and target loss functions, regularized via Eq. 3 with coefficient λ, and is given by: L(CS, WS, ) + L(CT , WT ) + λR(CS, CT ) (4) 3.3 Training Word representations of pivots c and non-pivots w in the source (cS, wS) and the target (cT , wT ) domains are parameters to be learnt in the proposed method. To derive parameter updates, we compute the gradients of the overall loss function in Eq. 4 w.r.t. to each parameter as follows: ∂L ∂wS = ( 0 if cS ⊤(wS −w∗ S) ≥1 −cS otherwise (5) ∂L ∂w∗ S = ( 0 if cS ⊤(wS −w∗ S) ≥1 cS otheriwse (6) ∂L ∂wT = ( 0 if cT ⊤(wT −w∗ T ) ≥1 −cT otherwise (7) ∂L ∂w∗ T = ( 0 if cT ⊤(wT −w∗ T ) ≥1 cT otherwise (8) ∂L ∂cS = ( λ(cS −cT ) if cS ⊤(wS −w∗ S) ≥1 w∗ S −wS + λ(cS −cT ) otherwise (9) ∂L ∂cT = ( λ(cT −cS) if cT ⊤(wT −w∗ T ) ≥1 w∗ T −wT + λ(cT −cS) otherwise (10) Here, for simplicity, we drop the arguments inside the loss function and write it as L. We use mini batch stochastic gradient descent with a batch size of 50 instances. AdaGrad (Duchi et al., 2011) is used to schedule the learning rate. All word representations are initialized with n dimensional random vectors sampled from a zero mean and unit variance Gaussian. Although the objective in Eq. 4 is not jointly convex in all four representations, it is convex w.r.t. the representation of a particular feature (pivot or non-pivot) when the representations for all the other features are held fixed. In our experiments, the training converged in all cases with less than 100 epochs over the dataset. The rank-based predictive hinge loss (Eq. 1) is inspired by the prior work on word representation learning for a single domain (Collobert et al., 2011). However, unlike the multilayer neural network in Collobert et al. (2011), the proposed method uses a computationally efficient single layer to reduce the number of parameters that must be learnt, thereby scaling to large datasets. 
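A sketch of the pivot-regulariser gradients and the AdaGrad step used in training; the parameter names and the base learning rate are our choices, not values reported by the authors.

```python
import numpy as np

def pivot_regulariser_grads(c_src, c_tgt, lam):
    """Gradients of (lam / 2) * ||c_S - c_T||^2 w.r.t. c_S and c_T,
    i.e. the regularisation terms added to the pivot updates."""
    diff = c_src - c_tgt
    return lam * diff, -lam * diff

def adagrad_step(param, grad, cache, lr=0.1, eps=1e-8):
    """In-place AdaGrad update: per-dimension step sizes shrink with the
    accumulated squared gradients."""
    cache += grad ** 2
    param -= lr * grad / (np.sqrt(cache) + eps)
    return param, cache
```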
Similar to the skip-gram model (Mikolov et al., 2013a), the proposed method predicts occurrences of contexts (non-pivots) w within a fixed-size contextual window of a target word (pivot) c. Scoring the co-occurrences of two words c and w by the bilinear form given by the inner-product is similar to prior work on domain-insensitive word-representation learning (Mnih and Hinton, 2008; Mikolov et al., 2013a). However, unlike those methods that use the softmax function to convert inner-products to probabilities, we directly use the inner-products without any further transformations, thereby avoiding computationally expensive distribution normalizations over the entire vocabulary. 3.4 Pivot Selection Given two sets of documents DS, DT respectively for the source and the target domains, we use the following procedure to select pivots and non-pivots. First, we tokenize and lemmatize each document using the Stanford CoreNLP toolkit2. Next, we extract unigrams and bigrams as features for representing a document. We remove features listed as stop words using a standard stop words list. Stop word removal increases the effective cooccurrence window size for a pivot. Finally, we remove features that occur less than 50 times in the entire set of documents. Several methods have been proposed in the prior work on domain adaptation for selecting a set of pivots from a given pair of domains such as the minimum frequency of occurrence of a feature in the two domains, mutual information (MI), and the entropy of the feature distribution over the documents (Pan et al., 2010). In our preliminary experiments, we discovered that a normalized version of the PMI (NPMI) (Bouma, 2009) to work consistently well for selecting pivots from different pairs of domains. NPMI between two features x and y is given by: NPMI(x, y) = log  p(x, y) p(x)p(y)  1 −log(p(x, y)) (11) Here, the joint probability p(x, y), and the marginal probabilities p(x) and p(y) are estimated using the number of co-occurrences of x and y in the sentences in the documents. Eq. 11 normalizes both the upper and lower bounds of the PMI. 2http://nlp.stanford.edu/software/ corenlp.shtml 735 We measure the appropriateness of a feature as a pivot according to the score given by: score(x) = min (NPMI(x, S), NPMI(x, T )) . (12) We rank features that are common to both domains in the descending order of their scores as given by Eq. 12, and select the top NP features as pivots. We rank features x that occur only in the source domain by NPMI(x, S), and select the top ranked NS features as source-specific non-pivots. Likewise, we rank the features x that occur only in the target domain by NPMI(x, T ), and select the top ranked NT features as target-specific non-pivots. The pivot selection criterion described here differs from that of Blitzer et al. (2006; 2007), where pivots are defined as features that behave similarly both in the source and the target domains. They compute the mutual information between a feature (i.e. unigrams or bigrams) and the sentiment labels using source domain labeled reviews. This method is useful when selecting pivots that are closely associated with positive or negative sentiment in the source domain. However, in unsupervised domain adaptation we do not have labeled data for the target domain. Therefore, the pivots selected using this approach are not guaranteed to demonstrate the same sentiment in the target domain as in the source domain. 
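Eq. 11 in plain code (our sketch); the probabilities are assumed to be the sentence-level co-occurrence estimates described above.

```python
import math

def npmi(p_xy: float, p_x: float, p_y: float) -> float:
    """Normalised PMI (Bouma, 2009): PMI(x, y) / (-log p(x, y)),
    which bounds the score to [-1, 1]."""
    if p_xy <= 0.0:
        return -1.0
    return math.log(p_xy / (p_x * p_y)) / (-math.log(p_xy))

print(round(npmi(0.01, 0.02, 0.03), 2))  # about 0.61 for these toy values
```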
On the other hand, the pivot selection method proposed in this paper focuses on identifying a subset of features that are closely associated with both domains. It is noteworthy that our proposed cross-domain word representation learning method (Section 3.2) does not assume any specific pivot/non-pivot selection method. Therefore, in principle, our proposed word representation learning method could be used with any of the previously proposed pivot selection methods. We defer a comprehensive evaluation of possible combinations of pivot selection methods and their effect on the proposed word representation learning method to future work. 3.5 Cross-Domain Sentiment Classification As a concrete application of cross-domain word representations, we describe a method for learning a cross-domain sentiment classifier using the word representations learnt by the proposed method. Existing word representation learning methods that learn from only a single domain are typically evaluated for their accuracy in measuring semantic similarity between words, or by solving word analogy problems. Unfortunately, such gold standard datasets capturing cross-domain semantic variations of words are unavailable. Therefore, by applying the learnt word representations in a cross-domain sentiment classification task, we can conduct an indirect extrinsic evaluation. The train data available for unsupervised crossdomain sentiment classification consists of unlabeled data for both the source and the target domains as well as labeled data for the source domain. We train a binary sentiment classifier using those train data, and apply it to classify sentiment of the target test data. Unsupervised cross-domain sentiment classification is challenging due to two reasons: featuremismatch, and semantic variation. First, the sets of features that occur in source and target domain documents are different. Therefore, a sentiment classifier trained using source domain labeled data is likely to encounter unseen features during test time. We refer to this as the feature-mismatch problem. Second, some of the features that occur in both domains will have different sentiments associated with them (e.g. lightweight). Therefore, a sentiment classifier trained using source domain labeled data is likely to incorrectly predict similar sentiment (as in the source) for such features. We call this the semantic variation problem. Next, we propose a method to overcome both problems using cross-domain word representations. Let us assume that we are given a set {(x(i) S , y(i))}n i=1 of n labeled reviews x(i) S for the source domain S. For simplicity, let us consider binary sentiment classification where each review x(i) is labeled either as positive (i.e. y(i) = 1) or negative (i.e. y(i) = −1). Our cross-domain binary sentiment classification method can be easily extended to multi-class classification. First, we lemmatize each word in a source domain labeled review x(i) S , and extract unigrams and bigrams as features to represent x(i) S by a binary-valued feature vector. Next, we train a binary linear classifier, θ, using those feature vectors. Any binary classification algorithm can be used for this purpose. We use θ(z) to denote the weight learnt by the classifier for a feature z. In our experiments, we used l2 regularized logistic regression. At test time, we represent a test target review by a binary-valued vector h using a the set of unigrams and bigrams extracted from that review. 
Then, the activation score ψ(h) of h is defined by:

\psi(h) = \sum_{c \in h} \sum_{c' \in \theta} \theta(c')\, f(c'_S, c_S) + \sum_{w \in h} \sum_{w' \in \theta} \theta(w')\, f(w'_S, w_T)    (13)

Here, f is a similarity measure between two vectors. If ψ(h) > 0, we classify h as positive, and negative otherwise. Eq. 13 measures the similarity between each feature in h against the features in the classification model θ. For pivots c ∈ h, we use the source domain representations to measure similarity, whereas for the (target-specific) non-pivots w ∈ h, we use their target domain representations. We experimented with several popular similarity measures for f and found cosine similarity to perform consistently well. We can interpret Eq. 13 as a method for expanding a test target document using nearest neighbor features from the source domain labeled data. It is analogous to query expansion used in information retrieval to improve document recall (Fang, 2008). Alternatively, Eq. 13 can be seen as a linearly-weighted additive kernel function over two feature spaces.

4 Experiments and Results

For training and evaluation purposes, we use the Amazon product reviews collected by Blitzer et al. (2007) for the four product categories: books (B), DVDs (D), electronic items (E), and kitchen appliances (K). There are 1000 positive and 1000 negative sentiment labeled reviews for each domain. Moreover, each domain has on average 17,547 unlabeled reviews. We use the standard split of 800 positive and 800 negative labeled reviews from each domain as training data, and the rest (200+200) for testing. For validation purposes we use the movie (source) and computer (target) domains, which were also collected by Blitzer et al. (2007), but are not part of the train/test domains. Experiments conducted using this validation dataset revealed that the performance of the proposed method is relatively insensitive to the value of the regularization parameter λ ∈ [10^{-3}, 10^{3}]. For the non-pivot prediction task we generate positive and negative instances using the procedure described in Section 3.2. As a typical example, we have 88,494 training instances from the books source domain and 141,756 training instances from the target domain (a 1:5 ratio between positive and negative instances in each domain). The numbers of pivots and non-pivots are set to NP = NS = NT = 500.

In Figure 1, we compare the proposed method against two baselines (NA, InDomain), current state-of-the-art methods for unsupervised cross-domain sentiment classification (SFA, SCL), word representation learning (GloVe), and cross-domain similarity prediction (CS). The NA (no-adapt) lower baseline uses a classifier trained on source labeled data to classify target test data without any domain adaptation. The InDomain baseline is trained using the labeled data for the target domain, and simulates the performance we can expect to obtain if target domain labeled data were available. Spectral Feature Alignment (SFA) (Pan et al., 2010) and Structural Correspondence Learning (SCL) (Blitzer et al., 2007) are the state-of-the-art methods for cross-domain sentiment classification. However, those methods do not learn word representations. We use Global Vectors (GloVe) (Pennington et al., 2014), the current state-of-the-art word representation learning method, to learn word representations separately from the source and target domain unlabeled data, and use the learnt representations in Eq. 13 for sentiment classification.
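In code, the Eq. 13 scoring can be sketched as follows; this is a re-implementation under assumptions rather than the released implementation. `theta` is the feature-to-weight mapping of the trained source classifier, `src_emb` and `tgt_emb` map features to their learnt source- and target-domain vectors, `pivots` is the pivot set of Section 3.4, and cosine similarity plays the role of f, as reported above.

```python
import numpy as np

def cosine(u, v):
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v) / denom if denom else 0.0

def activation_score(h_feats, theta, src_emb, tgt_emb, pivots):
    """Eq. 13: score a target-domain test review represented by its feature set h.

    Pivots in h are compared via their source-domain representation; target-specific
    non-pivots via their target-domain representation. Every comparison is against
    the source-domain vectors of the features weighted by the classifier.
    """
    score = 0.0
    for z in h_feats:
        emb = src_emb if z in pivots else tgt_emb
        if z not in emb:
            continue                            # feature without a learnt representation
        v = emb[z]
        for z_model, weight in theta.items():   # features (and weights) of the model
            if z_model in src_emb:
                score += weight * cosine(src_emb[z_model], v)
    return score

# predicted_label = 1 if activation_score(h, theta, src_emb, tgt_emb, pivots) > 0 else -1
```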
In contrast to the joint word representations learnt by the proposed method, GloVe simulates the level of performance we would obtain by learning representations independently. CS denotes the cross-domain vector prediction method proposed by Bollegala et al. (2014). Although CS can be used to learn a vector-space translation matrix, it does not learn word representations. Vertical bars represent the classification accuracies (i.e., the percentage of correctly classified test instances) obtained by a particular method on the target domain's test data, and Clopper-Pearson 95% binomial confidence intervals are superimposed.

Differences in data pre-processing (tokenization/lemmatization), selection (train/test splits), feature representation (unigram/bigram), pivot selection (MI/frequency), and the binary classification algorithms used to train the final classifier make it difficult to directly compare results published in prior work. Therefore, we re-run the original algorithms on the same processed dataset under the same conditions, such that any differences reported in Figure 1 can be directly attributed to the domain adaptation or word-representation learning methods compared. All methods use l2 regularized logistic regression as the binary sentiment classifier, and the regularization coefficients are set to their optimal values on the validation dataset. SFA, SCL, and CS use the same set of 500 pivots as used by the proposed method, selected using NPMI (Section 3.4). The dimensionality n of the representation is set to 300 for both GloVe and the proposed method.

[Figure 1: Accuracies obtained by different methods for each source-target pair in cross-domain sentiment classification. Panels: E->B, D->B, K->B; B->E, D->E, K->E; B->D, E->D, K->D; B->K, E->K, D->K. Vertical axis: Accuracy. Methods: NA, GloVe, SFA, SCL, CS, Proposed.]

From Fig. 1 we see that the proposed method reports the highest classification accuracies in all 12 domain pairs. Overall, the improvements of the proposed method over NA, GloVe, and CS are statistically significant, and its performance is comparable with SFA and SCL. The proposed method's improvement over CS shows the importance of predicting word representations instead of counting. The improvement over GloVe shows that it is inadequate to simply apply existing word representation learning methods to learn independent word representations for the source and target domains. We must consider the correspondences between the two domains, as expressed by the pivots, to jointly learn word representations.

As shown in Fig. 2, the proposed method reports superior accuracies over GloVe across different dimensionalities. Moreover, we see that when the dimensionality of the representations increases, accuracies initially increase in both methods and saturate after 200-600 dimensions. However, further increasing the dimensionality results in unstable and somewhat poor accuracies due to overfitting when training high-dimensional representations.

[Figure 2: Accuracy vs. dimensionality of the representation. Horizontal axis: Dimensions (0-1000); vertical axis: Accuracy. Methods: Proposed, GloVe, NA.]

Although the word representations learnt by the proposed method are not specific to sentiment classification, the fact that it clearly outperforms SFA and SCL in all domain pairs is encouraging, and implies the wider applicability of the proposed method for domain adaptation tasks beyond sentiment classification.
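For completeness, the exact Clopper-Pearson intervals superimposed in Figure 1 can be reproduced from a method's correct/total counts with SciPy; a short sketch (not part of the paper's pipeline):

```python
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) binomial confidence interval for k successes out of n."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

# e.g. a method that classifies 312 of the 400 target test reviews correctly:
# print(clopper_pearson(312, 400))   # approximately (0.74, 0.82) at 95% confidence
```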
5 Conclusion We proposed an unsupervised method for learning cross-domain word representations using a given set of pivots and non-pivots selected from a source and a target domain. Moreover, we proposed a domain adaptation method using the learnt word representations. Experimental results on a cross-domain sentiment classification task showed that the proposed method outperforms several competitive baselines and achieves best sentiment classification accuracies for all domain pairs. In future, we plan to apply the proposed method to other types of domain adaptation tasks such as cross-domain partof-speech tagging, named entity recognition, and relation extraction. Source code and pre-processed data etc. for this publication are publicly available3. 3www.csc.liv.ac.uk/˜danushka/prj/darep 738 References Marco Baroni and Alessandro Lenci. 2010. Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673 – 721. Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proc. of ACL, pages 238–247. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Jauvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137 – 1155. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proc. of EMNLP, pages 120 – 128. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, bollywood, boom-boxes and blenders: Domain adaptation for sentiment classification. In Proc. of ACL, pages 440 – 447. Danushka Bollegala, Yutaka Matsuo, and Mitsuru Ishizuka. 2011a. Relation adaptation: Learning to extract novel relations with minimum supervision. In Proc. of IJCAI, pages 2205 – 2210. Danushka Bollegala, David Weir, and John Carroll. 2011b. Using multiple sources to construct a sentiment sensitive thesaurus for cross-domain sentiment classification. In ACL/HLT, pages 132 – 141. Danushka Bollegala, Mitsuru Kusumoto, Yuichi Yoshida, and Ken ichi Kawarabayashi. 2013a. Mining for analogous tuples from an entity-relation graph. In Proc. of IJCAI, pages 2064 – 2070. Danushka Bollegala, Yutaka Matsuo, and Mitsuru Ishizuka. 2013b. Minimally supervised novel relation extraction using latent relational mapping. IEEE Transactions on Knowledge and Data Engineering, 25(2):419 – 432. Danushka Bollegala, David Weir, and John Carroll. 2014. Learning to predict distributions of words across domains. In Proc. of ACL, pages 613 – 623. Gerlof Bouma. 2009. Normalized (pointwsie) mutual information in collocation extraction. In Proc. of GSCL, pages 31 – 40. Ronan Collobert, Jason Weston, Leon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuska. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493 – 2537. Nguyen Tuan Duc, Danushka Bollegala, and Mitsuru Ishizuka. 2010. Using relational similarity between word pairs for latent relational search on the web. In IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, pages 196 – 199. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121 – 2159, July. Hui Fang. 2008. A re-examination of query expansion using lexical resources. In Proc. of ACL, pages 139– 147. Stefano Faralli and Roberto Navigli. 2012. 
A new minimally-supervised framework for domain word sense disambiguation. In EMNLP, pages 1411 – 1422. Xavier Glorot, Antoine Bordes, and Yoshua Bengio. 2011. Domain adaptation for large-scale sentiment classification: A deep learning approach. In Proc. of ICML. Stephan Gouws, Yoshua Bengio, and Greg Corrado. 2015. Bilbowa: Fast bilingual distributed representations without word alignments. In Proc. of ICML. Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual distributed representations without word alignment. In Proc. of ICLR. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proc. of ACL, pages 873 – 882. Jing Jiang and ChengXiang Zhai. 2007a. Instance weighting for domain adaptation in nlp. In ACL 2007, pages 264 – 271. Jing Jiang and ChengXiang Zhai. 2007b. A two-stage approach to domain adaptation for statistical classifiers. In CIKM 2007, pages 401–410. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proc. of COLING, pages 1459 – 1474. David McClosky, Eugene Charniak, and Mark Johnson. 2010. Automatic domain adaptation for parsing. In Proc. of NAACL/HLT, pages 28 – 36. Tomas Mikolov, Kai Chen, and Jeffrey Dean. 2013a. Efficient estimation of word representation in vector space. CoRR. Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proc. of NIPS, pages 3111 – 3119. Tomas Mikolov, Wen tau Yih, and Geoffrey Zweig. 2013c. Linguistic regularities in continous space word representations. In NAACL’13, pages 746 – 751. 739 Jeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proc. of ACLHLT, pages 236 – 244. Andriy Mnih and Geoffrey E. Hinton. 2008. A scalable hierarchical distributed language model. In Proc. of NIPS, pages 1081–1088. Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Proc. of NIPS. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In Proc. of EMNLP, pages 1059–1069. Sinno Jialin Pan, Xiaochuan Ni, Jian-Tao Sun, Qiang Yang, and Zheng Chen. 2010. Cross-domain sentiment classification via spectral feature alignment. In Proc. of WWW, pages 751 – 760. Jeffery Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: global vectors for word representation. In Proc. of EMNLP. Joseph Reisinger and Raymond J. Mooney. 2010. Multi-prototype vector-space models of word meaning. In Proc. of HLT-NAACL, pages 109 – 117. Michael Roth and Kristian Woodsend. 2014. Composition of word representations improves semantic role labelling. In Proc. of EMNLP, pages 407–413. Tobias Schnabel and Hinrich Sch¨utze. 2013. Towards robust cross-domain domain adaptation for part-ofspeech tagging. In Proc. of IJCNLP, pages 198 – 206. Richard Socher, Cliff Chiung-Yu Lin, Andrew Ng, and Chris Manning. 2011a. Parsing natural scenes and natural language with recursive neural networks. In ICML’11. Richard Socher, Jeffrey Pennington, Eric H. Huang, Andrew Y. Ng, and Christopher D. Manning. 2011b. Semi-supervised recursive autoencoders for predicting sentiment distributions. In Proc. of EMNLP, pages 151–161. Peter D. Turney and Patrick Pantel. 2010. 
From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141 – 188. Will Y. Zou, Richard Socher, Daniel Cer, and Christopher D. Manning. 2013. Bilingual word embeddings for phrase-based machine translation. In Proc. of EMNLP’13, pages 1393 – 1398.
2015
71
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 741–751, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics A Unified Multilingual Semantic Representation of Concepts Jos´e Camacho-Collados, Mohammad Taher Pilehvar and Roberto Navigli Department of Computer Science Sapienza University of Rome {collados,pilehvar,navigli}@di.uniroma1.it Abstract Semantic representation lies at the core of several applications in Natural Language Processing. However, most existing semantic representation techniques cannot be used effectively for the representation of individual word senses. We put forward a novel multilingual concept representation, called MUFFIN, which not only enables accurate representation of word senses in different languages, but also provides multiple advantages over existing approaches. MUFFIN represents a given concept in a unified semantic space irrespective of the language of interest, enabling cross-lingual comparison of different concepts. We evaluate our approach in two different evaluation benchmarks, semantic similarity and Word Sense Disambiguation, reporting state-of-the-art performance on several standard datasets. 1 Introduction Semantic representation, i.e., the task of representing a linguistic item (such as a word or a word sense) in a mathematical or machine-interpretable form, is a fundamental problem in Natural Language Processing (NLP). The Vector Space Model (VSM) is a prominent approach for semantic representation, with widespread popularity in numerous NLP applications. The prevailing methods for the computation of a vector space representation are based on distributional semantics (Harris, 1954). However, these approaches, whether in their conventional co-occurrence based form (Salton et al., 1975; Turney and Pantel, 2010; Landauer and Dooley, 2002), or in their newer predictive branch (Collobert and Weston, 2008; Mikolov et al., 2013; Baroni et al., 2014), suffer from a major drawback: they are unable to model individual word senses or concepts, as they conflate different meanings of a word into a single vectorial representation. This hinders the functionality of this group of vector space models in tasks such as Word Sense Disambiguation (WSD) that require the representation of individual word senses. There have been several efforts to adapt and apply distributional approaches to the representation of word senses (Pantel and Lin, 2002; Brody and Lapata, 2009; Reisinger and Mooney, 2010; Huang et al., 2012). However, none of these techniques provides representations that are already linked to a standard sense inventory, and consequently such mapping has to be carried out either manually, or with the help of sense-annotated data. Chen et al. (2014) addressed this issue and obtained vectors for individual word senses by leveraging WordNet glosses. NASARI (Camacho-Collados et al., 2015) is another approach that obtains accurate sense-specific representations by combining the complementary knowledge from WordNet and Wikipedia. Graph-based approaches have also been successfully utilized to model individual words (Hughes and Ramage, 2007; Agirre et al., 2009; Yeh et al., 2009), or concepts (Pilehvar et al., 2013; Pilehvar and Navigli, 2014), drawing on the structural properties of semantic networks. 
The applicability of all these techniques, however, is usually either constrained to a single language (usually English), or to a specific task. We put forward MUFFIN (Multilingual, UniFied and Flexible INterpretation), a novel method that exploits both structural knowledge derived from semantic networks and distributional statistics from text corpora, to produce effective representations of individual word senses or concepts. Our approach provides multiple advantages in comparison to the previous VSM techniques: 1. Multilingual: it enables sense representation in dozens of languages; 2. Unified: it represents a linguistic item, irrespective of its language, in a unified seman741 Figure 1: Our procedure for constructing a multilingual vector representation for a concept c. tic space having concepts as its dimensions, permitting direct comparison of different representations across languages, and hence enabling cross-lingual applications; 3. Flexible: it can be readily applied to different NLP tasks with minimal adaptation. We evaluate our semantic representation on two different tasks in lexical semantics: semantic similarity and Word Sense Disambiguation. To assess the multilingual capability of our approach, we also perform experiments on languages other than English on both tasks, and across languages for semantic similarity. We report state-of-the-art performance on multiple datasets and settings in both frameworks, which confirms the reliability and flexibility of our representations. 2 Methodology Figure 1 illustrates our procedure for constructing the vector representation of a given concept. We use BabelNet1 (version 2.5) as our main sense repository. BabelNet (Navigli and Ponzetto, 2012a) is a multilingual encyclopedic dictionary which merges WordNet with other lexical resources, such as Wikipedia and Wiktionary, thanks to its use of an automatic mapping algorithm. BabelNet extends the WordNet synset model to take into account multilinguality: a BabelNet synset contains the words that, in the various languages, express the given concept. Our approach for modeling a BabelNet synset consists of two main steps. First, for the given synset we gather contextual information from Wikipedia by exploiting knowledge from the BabelNet semantic network (Section 2.1). Then, by analyzing the corresponding contextual information and comparing and contrasting it with the 1http://www.babelnet.org whole Wikipedia corpus, we obtain a vectorial representation of the given synset (Section 2.2). 2.1 A Wikipedia sub-corpus for each concept Let c be a concept, which in our setting is a BabelNet synset, and let Wc be the set containing the Wikipedia page p corresponding to the concept c and all the Wikipedia pages having an outgoing link to p. We further enrich Wc with the corresponding Wikipedia pages of the hypernyms and hyponyms of c in the BabelNet network. Wc is the set of Wikipedia pages whose contents are exploited to build a representation for the concept c. We refer to the bag of content words in all the Wikipedia pages in Wc as the sub-corpus SCc for the concept c. 2.2 Vector construction: lexical specificity Lexical specificity (Lafon, 1980) is a statistical measure based on the hypergeometric distribution. 
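Before turning to how this measure is used for weighting, the sub-corpus construction of Section 2.1 can be sketched as follows; the `wiki` and `babelnet` lookup helpers are hypothetical stand-ins for whatever interface exposes Wikipedia pages, their inlinks, page contents, and BabelNet hypernym/hyponym links:

```python
def build_subcorpus(concept, wiki, babelnet):
    """Collect the bag of content words SC_c for a BabelNet synset c (Section 2.1).

    `wiki` and `babelnet` are hypothetical lookup interfaces assumed to expose the
    Wikipedia page of a synset, the pages linking to it, page contents, and the
    synset's hypernyms and hyponyms.
    """
    page = wiki.page_of(concept)                          # Wikipedia page p for c
    W_c = {page} | set(wiki.inlinks(page))                # p plus all pages linking to p
    for neighbour in babelnet.hypernyms(concept) + babelnet.hyponyms(concept):
        neighbour_page = wiki.page_of(neighbour)
        if neighbour_page is not None:
            W_c.add(neighbour_page)                       # enrich with hypernym/hyponym pages
    SC_c = []                                             # bag of content words
    for p in W_c:
        SC_c.extend(wiki.content_words(p))                # tokenized page text, content words only
    return SC_c
```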
Due to its efficiency in extracting a set of highly relevant words from a sub-corpus, the measure has recently gained popularity in different NLP applications, such as textual data analysis (Lebart et al., 1998), term extraction (Drouin, 2003), and domain-based term disambiguation (Camacho-Collados et al., 2014; Billami et al., 2014). We leverage lexical specificity to compute the weights in our vectors. In our earlier work (Camacho-Collados et al., 2015), we conducted different experiments which demonstrated the improvement that lexical specificity can provide over the popular term frequency-inverse document frequency weighting scheme (Jones, 1972, tf-idf). Lexical specificity computes the vector weights for an item, i.e., a word or a set of words, by comparing and contrasting its contextual information with a reference corpus. In our setting, we take the whole Wikipedia as our reference corpus RC (we use the October 2012 Wikipedia dump). 742 Let T and t be the respective total number of tokens in RC and SCc, while F and f denote the frequency of a given item in RC and SCc, respectively. Our goal is to compute a weight denoting the association of an item to the concept c. For notational brevity, we use the following expression to refer to positive lexical specificity: specificity(T, t, F, f) = −log10 P(X ≥f) (1) where X represents a random variable following a hypergeometric distribution of parameters F, t and T. As we are only interested in a set of items that are representative of the concept being modeled, we follow Billami et al. (2014) and only consider in our final vector the items which are relevant to SCc with a confidence higher than 99% according to the hypergeometric distribution (P(X ≥f) ≤0.01). On the basis of lexical specificity we put forward two types of representations: lexical and unified. The lexical vector representation lexc of a concept c has lemmas as its individual dimensions. To this end, we apply lexical specificity to every lemma in SCc in order to estimate the relevance of each lemma to our concept c. We use the lexical representation for the task of WSD (see Section 3.2). We describe the unified representation in the next subsection. 2.3 Unified representation Unlike the lexical version, our unified representation has concepts as individual dimensions. Algorithm 1 shows the construction process of a concept’s unified vector. The algorithm first clusters together those words that have a sense sharing the same hypernym (h in the algorithm) according to the BabelNet taxonomy (lines 2-4). Next, the specificity is computed for the set of all the hyponyms of h, even those that do not appear in the sub-corpus SCc (lines 6-14). Here, F and f denote the aggregated frequencies of all the hyponyms of h in the whole Wikipedia (i.e., reference corpus RC) and the sub-corpus SCc, respectively. Our binding of a set of sibling words into a single cluster represented by their common hypernym provides two advantages. Firstly, it transforms the representations to a unified semantic space. This space has concepts as its dimensions, enabling their comparability across languages. 
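The weighting and clustering just described are compact enough to sketch. The snippet below follows Eq. 1 and the prose account of Algorithm 1 (whose pseudocode appears shortly), using SciPy's hypergeometric survival function; the BabelNet lookup helpers, the reference-corpus frequency table `RC_freq`, and the exact form of the two-hyponym check are assumptions on our part:

```python
import math
from collections import Counter
from scipy.stats import hypergeom

def specificity(T, t, F, f):
    """Eq. 1: -log10 P(X >= f) for X hypergeometric with parameters F, t and T."""
    p_tail = hypergeom.sf(f - 1, T, F, t)        # P(X >= f)
    return -math.log10(p_tail) if p_tail > 0 else float("inf")

def unified_vector(SC_c, RC_freq, T, babelnet, alpha=0.01):
    """Sketch of Algorithm 1: one dimension per shared hypernym of the lemmas in SC_c."""
    sc_freq = Counter(SC_c)                      # item frequencies f in the sub-corpus
    t = len(SC_c)                                # tokens in the sub-corpus
    u_c = {}
    hypernyms = {h for lemma in sc_freq for h in babelnet.hypernyms_of_lemma(lemma)}
    for h in hypernyms:
        # aggregate every lexicalization of every hyponym of h (lines 6-13)
        lexes = {lex for hypo in babelnet.hyponyms(h)
                     for lex in babelnet.lexicalizations(hypo)}
        if len(lexes & set(sc_freq)) < 2:
            continue                             # need two distinct hyponym lemmas in SC_c
        F = sum(RC_freq.get(lex, 0) for lex in lexes)    # frequency in the reference corpus
        f = sum(sc_freq.get(lex, 0) for lex in lexes)    # frequency in the sub-corpus
        if hypergeom.sf(f - 1, T, F, t) <= alpha:        # keep only 99%-confident dimensions
            u_c[h] = specificity(T, t, F, f)
    return u_c
```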
Secondly, the clustering can be viewed as an implicit disambiguation process, whereby a set of potentially Algorithm 1 Unified Vector Construction Input: a concept c Output: the unified vector uc where uc(h) is the dimension corresponding to concept h 1: H ←∅ 2: for each lemma l ∈SCc 3: for each hypernym h of l in BabelNet 4: H ←H ∪{h} 5: vector uc ←null vector 6: for each h ∈H 7: if ∃l1, l2 ∈SCc: l1, l2 hyponyms of h and l1 ̸= l2 then 8: F ←0 9: f ←0 10: for each hyponym hypo of h 11: for each lexicalization lex of hypo 12: F ←F + freq(lex, RC) 13: f ←f + freq(lex, SCc) 14: uc(h) ←specificity(T, t, F, f)) 15: return vector uc ambiguous words are disambiguated into their intended sense on the basis of the contextual clues of the neighbouring content words, resulting in more accurate representations of meaning. Example. Table 1 lists the top-weighted concepts, represented by their relevant lexicalizations, in the unified vectors generated for the bird and machine senses of the noun crane and for three different languages.2 A comparison of concepts across the two senses indicates the effectiveness of our representation in identifying relevant concepts in different languages, while guaranteeing a clear distinction between the two meanings. 3 Applications Thanks to their VSM nature and the senselevel functionality, our concept representations are highly flexible, allowing us to adapt and apply them to different NLP tasks with minimal adaptation. In this section we explain how we use our representations in the tasks of semantic similarity (Section 3.1) and WSD (Section 3.2). Associating concepts with words. Given that our representations are for individual word senses, a preliminary step for both tasks would be to associate the set of concepts, i.e., BabelNet synsets, Cw = {c1, ..., cn} with a given word w. In the case when w exists in the BabelNet dictionary, we obtain the set of associated senses of the word as defined in the BabelNet sense inventory. In order to enhance the coverage in the case of 2We use the sense notation of Navigli (2009): wordp n is the nth sense of the word with part of speech p. 743 Crane (bird) Crane (machine) English French German English French German shore bird1 n ‡famille des oiseaux1 n ‡vogel-familie1 n ∗lifting device1 n ∗dispositif de levage1 n ∗hebevorrichtung1 n bird1 n ∗limicole1 n ∗charadrii1 n ‡construction4 n navire1 n radfahrzeug1 n ∗wading bird1 n oiseau aquatique2 n †vogel gattung1 n platform1 n limicole1 n †lenkfahrzeug1 n oscine bird1 n toll´e2 n wirbeltiere2 n warship1 n ⋄vaisseau2 n regler3 n †bird genus1 n gallinac´e1 n fleisch1 n electric circuit1 n spationef1 n reisebus1 n ‡bird family1 n ⋄classe1 n tier um1 n ⋄vessel2 n ‡construction2 n charadrii1 n ⋄taxonomic group1 n occurence1 n reiher1 n boat1 n †v´ehicule3 n g¨uterwagen2 n Table 1: Top-weighted concepts, i.e., BabelNet synsets, for the bird and machine senses of the noun crane. We represent each synset by one of its word senses. Word senses marked with the same symbol across languages correspond to the same BabelNet synset. words that are not defined in the BabelNet dictionary, we also exploit the so-called Wikipedia piped links. A piped link is a hyperlink appearing in the body of a Wikipedia article, providing a link to another Wikipedia article. For example, the piped link [[dockside crane|Crane (machine)]] is a hyperlink that appears as dockside crane in the text, but takes the user to the Wikipedia page titled Crane (machine). 
These links provide Wikipedia editors with the ability to represent a Wikipedia article through a suitable lexicalization that preserves the grammatical structure, contextual coherency, and flow of the sentence. This property provides an effective means of obtaining a set of concepts for the words not covered by BabelNet. For the case of our example, the BabelNet out-ofvocabulary word w = dockside crane will have in its set of associated concepts Cw the BabelNet synset corresponding to the Wikipedia page titled Crane (machine). 3.1 Semantic Similarity Once we have the set Cw of concepts associated with each word w, we first retrieve the set of corresponding unified vector representations. We then follow Camacho-Collados et al. (2015) and use square-rooted Weighted Overlap (Pilehvar et al., 2013, WO) as our vector comparison method, a metric that has been shown to suit specificitybased vectors more than the conventional cosine. WO compares two vectors on the basis of their overlapping dimensions, which are harmonically weighted by their relative ranking: WO(v1, v2) = P q∈O rank(q, v1) + rank(q, v2) −1 P|O| i=1(2i)−1 (2) where O is the set of overlapping dimensions (i.e. concepts) between the two vectors and rank(q, vi) is the rank of dimension q in the vector vi. Finally, the similarity between two words w1 and w2 is calculated as the similarity of their closest senses, a prevailing approach in the literature (Resnik, 1995; Budanitsky and Hirst, 2006): sim(w1, w2) = max v1∈Cw1,v2∈Cw2 p WO(v1, v2) (3) where w1 and w2 can belong to different languages. This cross-lingual similarity measurement is possible thanks to the unified languageindependent space of concepts of our semantic representations. 3.2 Multilingual Word Sense Disambiguation In order to be able to apply our approach to WSD, we use the lexical vector lexc for each concept c. The reason for our choice of lexical vectors in this setting is that they enable a direct comparison of a candidate sense’s representation with the context, which is also in the same lexical form. Algorithm 2 summarizes the general framework of our approach. Given a target word w to disambiguate, our approach proceeds by the following steps: 1. Retrieve Cw, the set of associated concepts with the target word w (line 1); 2. Obtain the lexical vector lexc for each concept c ∈Cw (cf. Section 2); 3. Calculate, for each candidate concept c, a confidence score (scorec) based on the harmonic sum of the ranks of the overlapping words between its lexical vector lexc and the context of the target word (line 5 in Algorithm 2). 744 Algorithm 2 MUFFIN for WSD Input: a target word w and a document d (context of w) Output: ˆc, the intended sense of w 1: for each concept c ∈Cw 2: scorec ←0 3: for each lemma l ∈d 4: if l ∈lexc then 5: scorec ←scorec + rank(l, lexc) −1 6: ˆc ←arg max c∈Cw scorec 7: return ˆc Thanks to the use of BabelNet, our approach is applicable to arbitrary languages. For the task of WSD, we focus on two major sense inventories integrated in BabelNet: Wikipedia and WordNet. Wikipedia sense inventory. In this case, we obtain the set of candidate senses for a target word by following the procedure described in the beginning of this Section (i.e., associating concepts with words). However, we do not consider those BabelNet synsets that are not associated with Wikipedia pages. WordNet sense inventory. Similarly, when restricted to the WordNet inventory, we discard those BabelNet synsets that do not contain a WordNet synset. 
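On the similarity side (Section 3.1), Eqs. 2 and 3 reduce to the short sketch below, where each unified vector is a sparse mapping from concepts to specificity weights and `senses` maps a word to its candidate BabelNet synsets; both data layouts are assumptions:

```python
def weighted_overlap(v1, v2):
    """Eq. 2: harmonically weighted overlap of two sparse vectors (dict: concept -> weight)."""
    overlap = set(v1) & set(v2)
    if not overlap:
        return 0.0
    rank1 = {q: r for r, q in enumerate(sorted(v1, key=v1.get, reverse=True), start=1)}
    rank2 = {q: r for r, q in enumerate(sorted(v2, key=v2.get, reverse=True), start=1)}
    numerator = sum(1.0 / (rank1[q] + rank2[q]) for q in overlap)
    denominator = sum(1.0 / (2 * i) for i in range(1, len(overlap) + 1))
    return numerator / denominator

def word_similarity(w1, w2, senses, vectors):
    """Eq. 3: similarity of two (possibly cross-lingual) words via their closest senses."""
    best = 0.0
    for c1 in senses[w1]:                       # candidate BabelNet synsets of w1
        for c2 in senses[w2]:
            best = max(best, weighted_overlap(vectors[c1], vectors[c2]) ** 0.5)
    return best                                 # square-rooted WO, as in the text
```

Because the dimensions are language-independent concepts, the same function serves the monolingual and cross-lingual settings evaluated later.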
In this setting, we also leverage relations from WordNet’s semantic network and its disambiguated glosses3 in order to obtain a richer set of Wikipedia articles in the sub-corpus construction. The enrichment of the semantic network with the disambiguated glosses has been shown to be beneficial in various graph-based disambiguation tasks (Navigli and Velardi, 2005; Agirre and Soroa, 2009; Pilehvar et al., 2013). 4 Experiments We assess the reliability of MUFFIN in two standard evaluation benchmarks: semantic similarity (Section 4.1) and Word Sense Disambiguation (Section 4.2). 4.1 Semantic Similarity As our semantic similarity experiment we opted for word similarity, which is one of the most popular evaluation frameworks in lexical semantics. Given a pair of words, the task in word similarity is to automatically judge their semantic similarity and, ideally, this judgement should be close to that given by humans. 3http://wordnet.princeton.edu/ glosstag.shtml 4.1.1 Datasets Monolingual. We picked the RG-65 dataset (Rubenstein and Goodenough, 1965) as our monolingual word similarity dataset. The dataset comprises 65 English word pairs which have been manually annotated by several annotators according to their similarity on a scale of 0 to 4. We also perform evaluations on the French (Joubarne and Inkpen, 2011) and German (Gurevych, 2005) adaptations of this dataset. Cross-lingual. Hassan and Mihalcea (2009) developed two sets of cross-lingual datasets based on the English MC-30 (Miller and Charles, 1991) and WordSim-353 (Finkelstein et al., 2002) datasets, for four different languages: English, German, Romanian, and Arabic. However, the construction procedure they adopted, consisting of translating the pairs to other languages while preserving the original similarity scores, has led to inconsistencies in the datasets. For instance, the Spanish dataset contains the identical pair mediodiamediodia with a similarity score of 3.42 (in the scale [0,4]). Additionally, the datasets contain several orthographic errors, such as despliege and grua (instead of despliegue and gr´ua) and incorrect translations (e.g., the English noun implement translated into the Spanish verb implementar). Kennedy and Hirst (2012) proposed a more reliable procedure that leverages two existing aligned monolingual word similarity datasets for the construction of a new cross-lingual dataset. To this end, for each two word pairs a-b and a’-b’ in the two datasets, if the difference in the corresponding scores is greater than one, the pairs are discarded. Otherwise, two new pairs a-b’ and a’-b are created with a score equal to the average of the two original pairs’ scores. In the case of repeated pairs, we merge them into a single pair with a similarity equal to their average scores. Using this procedure as a basis, Kennedy and Hirst (2012) created an English-French dataset consisting of 100 pairs. We followed the same procedure and built two datasets for English-German (consisting of 125 pairs) and German-French (comprising 96 pairs) language pairs.4 4.1.2 Comparison systems Monolingual. We benchmark our system against four other approaches that exploit 4The cross-lingual datasets are available at http:// lcl.uniroma1.it/sim-datasets/. 
745 English ρ r German ρ r French ρ r MUFFIN 0.83 0.84 MUFFIN 0.77 0.76 MUFFIN 0.71 0.77 SOC-PMI – 0.61 SOC-PMI – 0.27 SOC-PMI – 0.19 PMI – 0.41 PMI – 0.40 PMI – 0.34 Retrofitting 0.74 – Retrofitting 0.60 – Retrofitting 0.61 – LSA-Wiki 0.69 0.65 – – – LSA-Wiki 0.52 0.57 Wiki-wup – 0.59 Wiki-wup – 0.65 SSA 0.83 0.86 Resnik – 0.72 NASARI 0.84 0.82 Lesk hyper – 0.69 ADW 0.87 0.81 Word2Vec – 0.84 PMI-SVD – 0.74 ESA – 0.72 Table 2: Spearman (ρ) and Pearson (r) correlation performance of different systems on the English, German and French RG-65 datasets. Wikipedia as their main knowledge resource: SSA5 (Hassan and Mihalcea, 2011), ESA (Gabrilovich and Markovitch, 2007), Wiki-wup (Ponzetto and Strube, 2007), and LSA-Wiki (Granada et al., 2014). We also provide results for systems that use distributional semantics for modeling words, both the conventional co-occurrence based approach, i.e., PMI-SVD (Baroni et al., 2014), PMI and SOC-PMI (Joubarne and Inkpen, 2011), and Retrofitting (Faruqui et al., 2015), and the newer word embeddings, i.e., Word2Vec (Mikolov et al., 2013). For Word2Vec and PMISVD, we use the pre-trained models obtained by Baroni et al. (2014).6 As for WordNet-based approaches, we report results for Resnik (Resnik, 1995) and ADW (Pilehvar et al., 2013), which take advantage of its structural information, and Lesk hyper (Gurevych, 2005), which leverages definitional information in WordNet for similarity computation. Finally, we also report the performance of our earlier work NASARI (Camacho-Collados et al., 2015), which combines knowledge from WordNet and Wikipedia for the English language in its setting without the Wiktionary synonyms module. Cross-lingual. We compare the performance of our approach against the best configuration of the CL-MSR-2.0 system (Kennedy and Hirst, 2012), which exploits Pointwise Mutual Information (PMI) on a parallel corpus obtained from 5SSA involves several parameters tuned on datasets that are constructed on the basis of MC-30 and RG-65. 6We report the best configuration of the systems on the RG-65 dataset out of their 48 configurations. The corpus used to train the models contained 2.8 billion tokens, including Wikipedia (Baroni et al., 2014). the English and French versions of WordNet. Since two of our cross-lingual datasets are newlycreated, we developed three baseline systems to enable a more meaningful comparison. To this end, we first use Google Translate to translate the non-English side of the dataset to the English language. Accordingly, three state-of-the-art graphbased and corpus-based approaches were used to measure the similarity of the resulting English pairs. As English similarity measurement systems, we opted for ADW (Pilehvar et al., 2013), and the best predictive (Mikolov et al., 2013, Word2Vec) and co-occurrence (i.e., PMI-SVD) models obtained by Baroni et al. (2014).7 In our experiments we refer to these systems as pivot, since they use English as a pivot for computing semantic similarity. As a comparison, we also show results for MUFFINpivot, which is the variant of our system applied to the same automatically translated monolingual datasets. 4.1.3 Results Monolingual. We show in Table 2 the performance of different systems in terms of Spearman and Pearson correlations on the English, German, and French RG-65 datasets. On the German and French datasets, our system outperforms the comparison systems according to both evaluation measures. 
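For reference, the ρ and r columns reported here are the standard Spearman and Pearson correlations of system scores against the human gold judgements; with SciPy this evaluation is a few lines (the triple-based dataset layout is an assumption):

```python
from scipy.stats import spearmanr, pearsonr

def evaluate_word_similarity(pairs, sim_fn):
    """Spearman and Pearson correlation against gold scores (as in Table 2).

    pairs:  list of (word1, word2, gold_score) triples, e.g. the RG-65 pairs;
    sim_fn: any word-pair similarity function, such as the Eq. 3 sketch above.
    """
    gold = [g for _, _, g in pairs]
    pred = [sim_fn(w1, w2) for w1, w2, _ in pairs]
    return spearmanr(gold, pred)[0], pearsonr(gold, pred)[0]
```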
It achieves considerable Spearman and Pearson correlation leads of 0.1 and 0.2, respectively, on the French dataset in comparison to the best system. Also on the English RG-65 dataset, our system attains competitive performance according to both Spearman and Pearson correla7http://clic.cimec.unitn.it/composes/ semantic-vectors.html 746 Measure FR-EN EN-DE DE-FR MUFFIN 0.83 0.76 0.83 MUFFINpivot 0.83 0.73 0.79 ADWpivot 0.80 0.73 0.72 Word2Vecpivot 0.75 0.69 0.77 PMI-SVDpivot 0.76 0.72 0.65 CL-MSR-2.0 0.30 – – Table 3: Pearson correlation performance of different similarity measures on the three crosslingual RG-65 datasets. tions. We note that most state-of-the-art systems on the dataset (e.g., ADW) are restricted to the English language only. Cross-lingual. Pearson correlation results on the three cross-lingual RG-65 datasets are presented in Table 3. Similarly to the monolingual experiments, our system proves highly reliable in the cross-lingual setting, improving the performance of the comparison systems on all three language pairs. Moreover, MUFFINpivot attains the best results among the pivot systems on all datasets, confirming the reliability of our system in the monolingual setting. We note that since the cross-lingual datasets were built by translating the word pairs in the original English RG-65 dataset, the pivot-based comparison systems proved to be highly competitive, outperforming the CL-MSR2.0 system by a considerable margin. 4.2 Word Sense Disambiguation 4.2.1 Wikipedia In this setting, we selected the SemEval 2013 allwords WSD task (Navigli et al., 2013) as our evaluation benchmark. The task provides datasets for five different languages: Italian, English, French, Spanish and German. There are on average 1123 words to disambiguate in each language’s dataset. As comparison system, we provide results for the best-performing participating system on each language. We also show results for the state-of-theart WSD system of Moro et al. (2014, Babelfy), which relies on random walks on the BabelNet semantic network and a set of graph heuristic algorithms. Finally, we also report results for the Most Frequent Sense (MFS) baseline provided by the task organizers. We follow Moro et al. (2014) and back off to the MFS baseline in the case when our system’s judgement does not meet a threshold θ. Similarly to Babelfy, we tuned the value of the threshold θ on the trial dataset provided by the organizers of the task. We tuned θ with step size 0.05 (hence, 21 possible values in [0,1]), obtaining an optimal value of 0.85 in the trial set, a value which we use across all languages. Table 4 lists the F1 percentage performance of different systems on the five datasets of the SemEval-2013 all-words WSD task. Despite not being tuned to the task, our representations provide competitive results on all datasets, outperforming the sophisticated Babelfy system on the Spanish and German languages. The variant of our system not utilizing the MFS information in the disambiguation process (θ = 0), i.e., MUFFIN⋆, also shows competitive results, outperforming the best system in the SemEval-2013 dataset on all languages. Interestingly, MUFFIN⋆proves highly effective on the French language, surpassing not only the performance of our system using the MFS information, but also attaining the best overall performance. 
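The MFS back-off used above is a single thresholded decision once the system's best-scoring sense is known. A hedged sketch follows; the paper does not specify how the system score is normalized, so the [0, 1] confidence and the helper names are assumptions, while θ = 0.85 and the 0.05-step grid are as reported:

```python
def disambiguate_with_backoff(word, context, muffin_best, mfs_sense, theta=0.85):
    """Return the MUFFIN sense unless its confidence falls below theta (Section 4.2.1).

    muffin_best(word, context) -> (sense, confidence in [0, 1]) and
    mfs_sense(word) -> most frequent sense are hypothetical helpers.
    """
    sense, confidence = muffin_best(word, context)
    if sense is None or confidence < theta:
        return mfs_sense(word)              # back off to the most frequent sense
    return sense

# theta was tuned on the SemEval-2013 trial data over a 0.05-step grid:
# candidates = [round(i * 0.05, 2) for i in range(21)]      # 21 values in [0, 1]
# theta = max(candidates, key=lambda th: f1_on_trial(th))   # f1_on_trial: hypothetical
```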
4.2.2 WordNet As regards the WordNet disambiguation task, we take as our benchmark the two recent SemEval English all-words WSD tasks: the SemEval-2013 task on Multilingual WSD (Navigli et al., 2013) and the SemEval-2007 English Lexical Sample, SRL and All-Words task (Pradhan et al., 2007). The all-words datasets of the two tasks contain 1644 instances (SemEval-2013) and 162 noun instances (SemEval-2007), respectively. As comparison system, we report the performance of the best configuration of the topperforming system in the SemEval-2013 task, i.e., UMCC-DLSI (Guti´errez et al., 2013). We also show results for the state-of-the-art supervised system (Zhong and Ng, 2010, IMS), as well as for two graph-based approaches that are based on random walks on the WordNet graph (Agirre and Soroa, 2009, UKB w2w) and the BabelNet semantic network (Moro et al., 2014, Babelfy). We follow Babelfy and also exploit the WordNet’s sense frequency information from the SemCor senseannotated corpus (Miller et al., 1993). However, instead of simply backing off to the most frequent sense, we propose a more meaningful exploitation of this information. To this end, we compute the relevance of a specific sense as the average of its normalized sense frequency and its corresponding 747 System MFS Back off Italian English French Spanish German MUFFIN ✓ 81.9 84.5 71.4 85.1 83.1 MUFFIN⋆ 67.9 73.5 72.3 81.1 76.1 Babelfy ✓ 84.3 87.4 71.6 83.8 81.6 Best SemEval 2013 system ✓ 58.3 54.8 60.5 58.1 61.0 MFS 82.2 80.2 69.1 82.1 83.0 Table 4: F1 percentage performance on the SemEval-2013 Multilingual WSD datasets using Wikipedia as sense inventory. score (scorec in Algorithm 2) given by our system. The sense with the highest overall relevance value is then picked as the intended sense. Additionally, we put forward a hybrid system that combines our system with IMS, hence benefiting from the judgements made by two systems that utilize complementary information. Our system makes judgements based on global contexts, whereas IMS exploits the local context of the target word. To this end, we compute the relevance of a specific sense as the average of the normalized scores given by IMS and our system (scorec in Algorithm 2). We refer to this hybrid system as MUFFIN+IMS. Table 5 reports the F1 percentage performance of different systems on the datasets of SemEval2013 and SemEval-2007 English all-words WSD tasks. We also report the results for the MFS baseline, which always picks the most frequent sense of a word. Similarly to the disambiguation task on the Wikipedia sense inventory, MUFFIN proves to be quite competitive on the WordNet disambiguation task, while surpassing the performance of all the comparison systems on the SemEval2013 dataset. On the SemEval-2007 dataset, IMS achieves the best performance, thanks to its usage of large amounts of manually and semiautomatically tagged data. Finally, our hybrid system, MUFFIN+IMS, provides the best overall performance on the two datasets, showing that our combination of the two WSD systems that utilize different types of knowledge was beneficial. 5 Related work We briefly review the recent literature on the two NLP tasks to which we applied our representations, i.e., Word Sense Disambiguation and semantic similarity. WSD. 
There are two main categories of WSD techniques: knowledge-based and supervised System SemEval-2013 SemEval-2007 MUFFIN 66.0 66.0 UKB 61.3 56.0 UMCC-DLSI 64.7 – IMS 65.3 67.3 Babelfy 65.9 62.7 MFS 63.2 65.8 MUFFIN+IMS 66.9 68.5 Table 5: F1 percentage performance on the SemEval-2013 and SemEval-2007 (noun instances) English All-words WSD datatets using WordNet as sense inventory. (Navigli, 2009). Supervised systems such as IMS (Zhong and Ng, 2010) analyze sense-annotated data and model the context in which the various senses of a word usually appear. Despite their accuracy for the words that are provided with suitable amounts of sense-annotated data, their applicability is limited to those words and languages for which such data is available, practically limiting them to a small subset of words mainly in the English language. Knowledge-based approaches (Sinha and Mihalcea, 2007; Navigli and Lapata, 2007; Agirre and Soroa, 2009) significantly improve the coverage of supervised systems. However, similarly to their supervised counterparts, knowledge-based techniques are usually limited to the English language. Recent years have seen a growing interest in cross-lingual and multilingual WSD (Lefever and Hoste, 2010; Lefever and Hoste, 2013; Navigli et al., 2013). Multilinguality is usually offered by methods that exploit the structural information of large-scale multilingual lexical resources such as Wikipedia (Guti´errez et al., 2013; Manion and Sainudiin, 2013; Hovy et al., 2013). Babelfy (Moro et al., 2014) is an approach with state-ofthe-art performance that relies on random walks 748 on BabelNet multilingual semantic network (Navigli and Ponzetto, 2012a) and densest subgraph heuristics. However, the approach is limited to the WSD and Entity Linking tasks. In contrast, our approach is global as it can be used in different NLP tasks, including WSD. Semantic similarity. Semantic similarity of word pairs is usually computed either on the basis of the structural properties of lexical databases and thesauri, or by comparing vectorial representations of words learned from massive text corpora. Structural approaches usually measure the similarity on the basis of the distance information on semantic networks, such as WordNet (Budanitsky and Hirst, 2006), or thesauri, such as Roget’s (Morris and Hirst, 1991; Jarmasz and Szpakowicz, 2003). The semantic network of WordNet has also been used in more sophisticated techniques such as those based on random graph walks (Ramage et al., 2009; Pilehvar et al., 2013), or coupled with the complementary knowledge from Wikipedia (Camacho-Collados et al., 2015). However, these techniques are either limited in the languages to which they can be applied, or in their applicability to tasks other than semantic similarity (Navigli and Ponzetto, 2012b). Corpus-based techniques are more flexible, enabling the training of models on corpora other than English. However, these approaches, either in their conventional co-occurrence based form (Gabrilovich and Markovitch, 2007; Landauer and Dumais, 1997; Turney and Pantel, 2010; Bullinaria and Levy, 2012), or the more recent predictive models (Mikolov et al., 2013; Collobert and Weston, 2008; Pennington et al., 2014), are restricted in two ways: (1) they cannot be used to compare word senses; and (2) they cannot be directly applied to cross-lingual semantic similarity. 
Though the first problem has been solved by multi-prototype models (Huang et al., 2012), or by the sense-specific representations obtained as a result of exploiting WordNet glosses (Chen et al., 2014), the second problem remains unaddressed. In contrast, our approach models word senses and concepts effectively, while providing a unified representation for different languages that enables cross-lingual semantic similarity. 6 Conclusions This paper presented MUFFIN, a new multilingual, unified and flexible representation of individual word senses. Thanks to its effective combination of distributional statistics and structured knowledge, the approach can compute efficient representations of arbitrary word senses, with high coverage and irrespective of their language. We evaluated our representations on two different NLP tasks, i.e., semantic similarity and Word Sense Disambiguation, reporting state-of-the-art performance on several datasets. Experimental results demonstrated the reliability of our unified representation approach, while at the same time also highlighting its main advantages: multilinguality, owing to its effective application within and across multiple languages; and flexibility, owing to its robust performance on two different tasks. Acknowledgments The authors gratefully acknowledge the support of the ERC Starting Grant MultiJEDI No. 259234. References Eneko Agirre and Aitor Soroa. 2009. Personalizing PageRank for Word Sense Disambiguation. In Proceedings of EACL, pages 33–41. Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Pas¸ca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and WordNet-based approaches. In Proceedings of NAACL, pages 19–27. Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! a systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of ACL, pages 238–247. Mokhtar-Boumeyden Billami, Jos´e CamachoCollados, Evelyne Jacquey, and Laurence Kister. 2014. Semantic annotation and terminology validation in full scientific articles in social sciences and humanities (annotation s´emantique et validation terminologique en texte int´egral en shs) [in french]. In Proceedings of TALN 2014, pages 363–376. Samuel Brody and Mirella Lapata. 2009. Bayesian Word Sense Induction. In Proceedings of EACL, pages 103–111. Alexander Budanitsky and Graeme Hirst. 2006. Evaluating WordNet-based measures of Lexical Semantic Relatedness. Computational Linguistics, 32(1):13–47. John A. Bullinaria and Joseph P. Levy. 2012. Extracting semantic representations from word co749 occurrence statistics: stop-lists, stemming, and SVD. Behavior Research Methods, 44(3):890–907. Jos´e Camacho-Collados, Mokhtar Billami, Evelyne Jacquey, and Laurence Kister. 2014. Approche statistique pour le filtrage terminologique des occurrences de candidats termes en texte int´egral. In JADT, pages 121–133. Jos´e Camacho-Collados, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. NASARI: a Novel Approach to a Semantically-Aware Representation of Items. In Proceedings of NAACL, pages 567–577. Xinxiong Chen, Zhiyuan Liu, and Maosong Sun. 2014. A unified model for word sense representation and disambiguation. In Proceedings of EMNLP, pages 1025–1035. Ronan Collobert and Jason Weston. 2008. A unified architecture for natural language processing: Deep neural networks with multitask learning. In Proceedings of ICML, pages 160–167. Patrick Drouin. 2003. 
Term extraction using nontechnical corpora as a point of leverage. Terminology, 9(1):99–115. Manaal Faruqui, Jesse Dodge, Sujay K. Jauhar, Chris Dyer, Eduard Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of NAACL, pages 1606–1615. Lev Finkelstein, Gabrilovich Evgeniy, Matias Yossi, Rivlin Ehud, Solan Zach, Wolfman Gadi, and Ruppin Eytan. 2002. Placing search in context: The concept revisited. ACM Transactions on Information Systems, 20(1):116–131. Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using Wikipediabased explicit semantic analysis. In Proceedings of IJCAI, pages 1606–1611. Roger Granada, Cassia Trojahn, and Renata Vieira. 2014. Comparing semantic relatedness between word pairs in Portuguese using Wikipedia. In Computational Processing of the Portuguese Language, pages 170–175. Iryna Gurevych. 2005. Using the structure of a conceptual network in computing semantic relatedness. In Proceedings of IJCNLP, pages 767–778. Yoan Guti´errez, Yenier Casta˜neda, Andy Gonz´alez, Rainel Estrada, D. Dennys Piug, I. Jose Abreu, Roger P´erez, Antonio Fern´andez Orqu´ın, Andr´es Montoyo, Rafael Mu˜noz, and Franc Camara. 2013. UMCC DLSI: Reinforcing a ranking algorithm with sense frequencies and multidimensional semantic resources to solve multilingual word sense disambiguation. In Proceedings of SemEval 2013, pages 241–249. Zellig Harris. 1954. Distributional structure. Word, 10:146–162. Samer Hassan and Rada Mihalcea. 2009. Crosslingual semantic relatedness using encyclopedic knowledge. In Proceedings of EMNLP, pages 1192– 1201. Samer Hassan and Rada Mihalcea. 2011. Semantic relatedness using salient semantic analysis. In Proceedings of AAAI, pages 884,889. Eduard H. Hovy, Roberto Navigli, and Simone Paolo Ponzetto. 2013. Collaboratively built semistructured content and Artificial Intelligence: The story so far. Artificial Intelligence, 194:2–27. Eric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of ACL, pages 873–882. Thad Hughes and Daniel Ramage. 2007. Lexical semantic relatedness with random graph walks. In Proceedings of EMNLP-CoNLL, pages 581–589. Mario Jarmasz and Stan Szpakowicz. 2003. Roget’s thesaurus and semantic similarity. In Proceedings of RANLP, pages 212–219. Karen Sp¨arck Jones. 1972. A statistical interpretation of term specificity and its application in retrieval. Journal of Documentation, 28:11–21. Colette Joubarne and Diana Inkpen. 2011. Comparison of semantic similarity for different languages using the Google n-gram corpus and second-order co-occurrence measures. In Advances in Artificial Intelligence, pages 216–221. Alistair Kennedy and Graeme Hirst. 2012. Measuring semantic relatedness across languages. In Proceedings of xLiTe: Cross-Lingual Technologies Workshop at the Neural Information Processing Systems Conference. Pierre Lafon. 1980. Sur la variabilit´e de la fr´equence des formes dans un corpus. Mots, 1:127–165. Tom Landauer and Scott Dooley. 2002. Latent semantic analysis: theory, method and application. In Proceedings of CSCL, pages 742–743. Thomas K Landauer and Susan T Dumais. 1997. A solution to Plato’s problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211. Ludovic Lebart, A Salem, and Lisette Berry. 1998. Exploring textual data. Kluwer Academic Publishers. 
Els Lefever and Veronique Hoste. 2010. SemEval2010 Task 3: Cross-lingual Word Sense Disambiguation. In Proceedings of SemEval 2010, pages 82–87, Uppsala, Sweden. Els Lefever and Veronique Hoste. 2013. SemEval2013 Task 10: Cross-lingual Word Sense Disambiguation. In Proceedings of SemEval 2013, pages 158–166, Atlanta, USA. 750 Steve L. Manion and Raazesh Sainudiin. 2013. Daebak!: Peripheral diversity for multilingual Word Sense Disambiguation. In Proceedings of SemEval 2013, pages 250–254. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. George A. Miller and Walter G. Charles. 1991. Contextual correlates of semantic similarity. Language and Cognitive Processes, 6(1):1–28. George A. Miller, Claudia Leacock, Randee Tengi, and Ross Bunker. 1993. A semantic concordance. In Proceedings of the 3rd DARPA Workshop on Human Language Technology, pages 303–308, Plainsboro, N.J. Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity Linking meets Word Sense Disambiguation: a Unified Approach. Transactions of the Association for Computational Linguistics (TACL), 2:231–244. Jane Morris and Graeme Hirst. 1991. Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Computational Linguistics, 17(1). Roberto Navigli and Mirella Lapata. 2007. Graph connectivity measures for unsupervised Word Sense Disambiguation. In Proceedings of IJCAI, pages 1683–1688. Roberto Navigli and Simone Paolo Ponzetto. 2012a. BabelNet: The automatic construction, evaluation and application of a wide-coverage multilingual semantic network. Artificial Intelligence, 193:217– 250. Roberto Navigli and Simone Paolo Ponzetto. 2012b. BabelRelate! a joint multilingual approach to computing semantic relatedness. In Proceedings of AAAI, pages 108–114. Roberto Navigli and Paola Velardi. 2005. Structural Semantic Interconnections: a knowledge-based approach to Word Sense Disambiguation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 27(7):1075–1088. Roberto Navigli, David Jurgens, and Daniele Vannella. 2013. SemEval-2013 Task 12: Multilingual Word Sense Disambiguation. In Proceedings of SemEval 2013, pages 222–231. Roberto Navigli. 2009. Word Sense Disambiguation: A survey. ACM Computing Surveys, 41(2):1–69. Patrick Pantel and Dekang Lin. 2002. Discovering word senses from text. In Proceedings of KDD, pages 613–619. Jeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of EMNLP, pages 1532–1543. Mohammad Taher Pilehvar and Roberto Navigli. 2014. A robust approach to aligning heterogeneous lexical resources. In Proceedings of ACL, pages 468–478. Mohammad Taher Pilehvar, David Jurgens, and Roberto Navigli. 2013. Align, Disambiguate and Walk: a Unified Approach for Measuring Semantic Similarity. In Proceedings of ACL, pages 1341– 1351. Simone Paolo Ponzetto and Michael Strube. 2007. Knowledge derived from Wikipedia for computing semantic relatedness. Journal of Artificial Intelligence Research (JAIR), 30:181–212. Sameer Pradhan, Edward Loper, Dmitriy Dligach, and Martha Palmer. 2007. SemEval-2007 task-17: English lexical sample, SRL and all words. In Proceedings of SemEval, pages 87–92. Daniel Ramage, Anna N. Rafferty, and Christopher D. Manning. 2009. Random walks for text semantic similarity. In Proceedings of the 2009 Workshop on Graph-based Methods for Natural Language Processing, pages 23–31. 
Joseph Reisinger and Raymond J. Mooney. 2010. Multi-prototype vector-space models of word meaning. In Proceedings of ACL, pages 109–117. Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of IJCAI, pages 448–453. Herbert Rubenstein and John B. Goodenough. 1965. Contextual correlates of synonymy. Communications of the ACM, 8(10):627–633. Gerard Salton, A. Wong, and C. S. Yang. 1975. A vector space model for automatic indexing. Communications of the ACM, 18(11):613–620. Ravi Sinha and Rada Mihalcea. 2007. Unsupervised graph-based Word Sense Disambiguation using measures of word semantic similarity. In Proceedings of ICSC, pages 363–369. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141–188. Eric Yeh, Daniel Ramage, Christopher D. Manning, Eneko Agirre, and Aitor Soroa. 2009. WikiWalk: random walks on Wikipedia for semantic relatedness. In Proceedings of the Workshop on Graphbased Methods for Natural Language Processing, pages 41–49. Zhi Zhong and Hwee Tou Ng. 2010. It Makes Sense: A wide-coverage Word Sense Disambiguation system for free text. In Proceedings of the ACL System Demonstrations, pages 78–83. 751
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 752–762, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Demographic Factors Improve Classification Performance Dirk Hovy Center for Language Technology University of Copenhagen, Denmark Njalsgade 140 [email protected] Abstract Extra-linguistic factors influence language use, and are accounted for by speakers and listeners. Most natural language processing (NLP) tasks to date, however, treat language as uniform. This assumption can harm performance. We investigate the effect of including demographic information on performance in a variety of text-classification tasks. We find that by including age or gender information, we consistently and significantly improve performance over demographic-agnostic models. These results hold across three text-classification tasks in five languages. 1 Introduction When we use language, we take demographic factors of the speakers into account. In other words, we do have certain expectations as to who uses “super cute,” “rather satisfying,” or “rad, dude.” Sociolinguistics has long since studied the interplay between demographic factors and language use (Labov, 1964; Milroy and Milroy, 1992; Holmes, 1997; Macaulay, 2001; Macaulay, 2002; Barbieri, 2008; Wieling et al., 2011; Rickford and Price, 2013, inter alia).1 These factors greatly influence word choice, syntax, and even semantics. In natural language processing (NLP), however, we have largely ignored demographic factors, and treated language as a uniform medium. It was irrelevant, (and thus not modeled) whether a text was produced by a middle-aged man, an elderly lady, or a teenager. These three groups, however, differ along a whole host of demographic axes, and these differences are reflected in their language use. 1Apart from the demographic factors, other factors such as mood, interpersonal relationship, authority, language attitude, etc. contribute to our perception of language. A model that is agnostic to demographic differences will lose these distinctions, and performance suffers whenever the model is applied to a new demographic. Historically, the demographics of training and test data (newswire) were relatively homogenous, language was relatively uniform, and information the main objective. Under these uniform conditions, the impact of demographics on performance was small. Lately, however, NLP is increasingly applied to other domains, such as social media, where language is less canonical, demographic information about the author is available, and the authors’ goals are no longer purely informational. The influence of demographic factors in this medium is thus much stronger than on the data we have traditionally used to induce models. The resulting performance drops have often been addressed via various domain adaptation approaches (Blitzer et al., 2006; Daume III and Marcu, 2006; Reichart and Rappoport, 2007; Chen et al., 2009; Daum´e et al., 2010; Chen et al., 2011; Plank and Moschitti, 2013; Plank et al., 2014; Hovy et al., 2015b, inter alia). However, the authors and target demographics of social media differ radically from those in newswire text, and domain might in some case be a secondary effect to demographics. In this paper, we thus ask whether we also need demographic adaptation. Concretely, we investigate 1. how we can encode demographic factors, and 2. 
what effect they have on the performance of text-classification tasks We focus on age and gender, and similarly to Bamman et al. (2014a), we use distributed word representations (embeddings) conditioned on these demographic factors (see Section 2.1) to incorporate the information. We evaluate the effect of demographic information on classification performance in three NLP 752 tasks: sentiment analysis (Section 2.2), topic detection (Section 2.3), and author attribute classification (Section 2.4). 2 We compare F1-performance of classifiers a) trained with access to demographic information, or b) under agnostic conditions. We find that demographic-aware models consistently outperform their agnostic counterparts in all tasks. Our contributions We investigate the effect of demographic factors on classification performance. We show that NLP systems benefit from demographic awareness, i.e., that information about age and gender can lead to significant performance improvements in three different NLP tasks across five different languages. 2 Data We use data from an international user review website, Trustpilot. It contains information both about the review (text and star rating), as well as the reviewer, in form of a profile. The profile included a screen name, and potentially information about gender and birth year. Since demographic factors are extra-linguistic, we assume that the same effects hold irrespective of language. To investigate this hypothesis, we use data from several languages (Danish, French, and German) and varieties (American English, British English). We use data from the countries with most users, i.e., Great Britain, Denmark, Germany, France, and the US. The selection was made based on the availability of sufficient amounts of training data (see Table 1 for more details). The high number of users in Denmark (one tenth of the country’s population) might be due to the fact that Trustpilot is a Danish company and thus existed there longer than in other countries. Danish users also provide (in relative terms) more information about themselves than users of any other country, so that even in absolute numbers, there is oftentimes more information available than for larger countries like France or Germany, where users are more reluctant to disclose information. While most of this profile information is voluntary, we have good coverage for both age and 2We selected these tasks to represent a range of textclassification applications, and based on the availability of suitable data with respect to target and demographic variables. USERS AGE GENDER PLACE ALL UK 1,424k 7% 62% 5% 4% France 741k 3% 53% 2% 1% Denmark 671k 23% 87% 17% 16% US 648k 8% 59% 7% 4% Germany 329k 8% 47% 6% 4% Table 1: Number of users and % per variable per country (after applying augmentations). gender. In case of missing gender values, we base a guess on the first name (if given), by choosing the gender most frequently associated with that name in the particular language. We do require that one gender is prevalent (accounting for 95% of all mentions), and that there is enough support (at least 3 attributed instances), though. For age, coverage is less dense, so the resulting data sets are smaller, but still sufficient. For more information on Trustpilot as a resource, see Hovy et al. (2015a). We split each review into sentences, tokenize, replace numbers with a 0, lowercase the data, and join frequent bigrams with an underscore to form a single token. 
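This preprocessing step can be sketched as follows (a minimal illustration only; the regex-based sentence splitter and tokenizer, and the bigram-frequency threshold of 100, are assumptions, since the paper does not name the tools or cut-offs it used):

import re
from collections import Counter

def preprocess(reviews, bigram_min_count=100):
    """Sentence-split, tokenize, mask digits with 0, lowercase, and join frequent bigrams."""
    sents = []
    for review in reviews:
        for sent in re.split(r'(?<=[.!?])\s+', review):          # crude sentence splitter
            tokens = [re.sub(r'\d+', '0', t.lower())              # numbers -> 0, lowercase
                      for t in re.findall(r"\w+|[^\w\s]", sent)]  # simple tokenizer
            if tokens:
                sents.append(tokens)
    bigrams = Counter((a, b) for s in sents for a, b in zip(s, s[1:]))
    frequent = {bg for bg, c in bigrams.items() if c >= bigram_min_count}
    joined = []
    for s in sents:
        out, i = [], 0
        while i < len(s):
            if i + 1 < len(s) and (s[i], s[i + 1]) in frequent:
                out.append(s[i] + '_' + s[i + 1]); i += 2          # join frequent bigram
            else:
                out.append(s[i]); i += 1
        joined.append(out)
    return joined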
For each language, we collect four sub-corpora, namely two for gender (male and female) and two for age (under 35 and over 45). The subcorpora for the discrete variable gender are relatively straightforward (although see (Bamman et al., 2014b)), but the split for the continuous age variable are less clear. While the effect of age on language use is undisputed (Barke, 2000; Barbieri, 2008; Rickford and Price, 2013), providing a clear cut-off is hard. We therefore use age ranges that result in roughly equally sized data sets for both groups, and that are not contiguous. For each independent variable (age and gender), we induce embeddings for the two sub-groups (see section 2.1), as well as a “mixed” setting. We also extract labeled data for each task (see sections 2.2, 2.3, and 2.4). Each of these data sets is randomly split into training and test data, 60:40. Note that we do not set any parameters on development data, but instead use off-the-shelf software with default parameters for classification. Table 2 gives an overview of the number of training and test instances for each task and both variables (gender and age). Note that this setup is somewhat artificial: the vocabulary of the embeddings can subsume the 753 GENDER AGE TASK COUNTRY TRAIN TEST TRAIN TEST TOPIC Denmark 72.48k 48.32k 26.89k 17.93k France 33.34k 22.23k 3.67k 2.45k Germany 18.35k 12.23k 4.82k 3.22k UK 110.40k 73.60k 13.26k 8.84k US 36.95k 24.63k 7.25k 4.84k SENTIMENT Denmark 150.29k 100.19k 45.18k 30.12k France 40.38k 26.92k 3.94k 2.63k Germany 17.35k 11.57k 3.52k 2.35k UK 93.98k 62.65k 15.80k 10.53k US 43.36k 28.91k 3.90k 2.60k ATTRIBUTES Denmark 180.31k 120.20k 180.31k 120.20k France 10.69k 7.12k 10.69k 7.12k Germany 11.47k 7.64k 11.47k 7.64k UK 70.87k 47.25k 70.87k 47.25k US 28.10k 18.73k 28.10k 18.73k total 918.32k 612.20k 429.66k 286.43k Table 2: Number of sentences per task for gender and age as independent variable vocabulary of the tasks (there is some loss due to frequency cut-offs in word2vec). The out-ofvocabulary rate on the tasks is thus artificially low and can inflate results. In a standard “improvement over baseline”-setup, this would be problematic. However, the results should not be interpreted with respect to their absolute value on the respective tasks, but with respect to the relative differences. 2.1 Conditional Embeddings COUNTRY AGE GENDER Denmark 495k 1.6m France 36k 490k Germany 47k 211k UK 232k 1.63m US 70k 576k total 880k 4.51m Table 3: Number of sentences used to induce embeddings Embeddings are distributed representations of words in a vector space, capturing syntactic and semantic regularities among the words. We learn our word embeddings by using word2vec3 (Mikolov et al., 2013) on unlabeled review data. Our corpora are relatively small, compared to the language modeling tasks the tool was developed for (see Table 3 for the number of instances used for each language and variable). We thus follow the suggestions in the word2vec documentation and use the skip-gram model and hierarchical softmax rather than the standard continuous-bag-ofwords model. This setting penalizes low-frequent words less. All out-of-vocabulary (OOV) words are replaced with an “unknown” token, which is represented as the averaged vector over all other words. In this paper, we want to use embeddings to capture group-specific differences. We therefore train embeddings on each of the sub-corpora (e.g., male, female, and U35, O45) separately. As comparison, we create a mixed setting. 
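A hedged sketch of how such sub-corpus ("conditional") embeddings might be trained; using gensim is an assumption (the paper works with the word2vec tool directly), min_count=5 is gensim's default frequency cut-off rather than a reported setting, and parameter names follow gensim 4.x:

from gensim.models import Word2Vec

def train_subgroup_embeddings(sentences, dim=100):
    # skip-gram (sg=1) with hierarchical softmax (hs=1, negative=0), as described in Section 2.1
    return Word2Vec(sentences=sentences, vector_size=dim, sg=1, hs=1,
                    negative=0, min_count=5, workers=4)

# one model per sub-corpus, plus one for the size-balanced mixed corpus, e.g.:
# male_wv   = train_subgroup_embeddings(male_sentences).wv
# female_wv = train_subgroup_embeddings(female_sentences).wv
# mixed_wv  = train_subgroup_embeddings(mixed_sentences).wv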
For each variable, we combine half of both sub-corpora (say, men and women) to form a third corpus with no demographic distinction. We also train embeddings on this data. This setting assumes that there are no demographic differences, which is the common approach in NLP to date. Since embeddings depend crucially on the 3https://code.google.com/p/word2vec/ 754 size of the available training data, and since we want to avoid modeling size effects, we balance the three corpora we use to induce embeddings such that all three contain the same number of instances.4 Note that while we condition the embeddings on demographic variables, they are not task-specific. While general-purpose embeddings are widely used in the NLP community, task-specific embeddings are known to lead to better results for various tasks, including sentiment analysis (Tang et al., 2014). Inducing task-specific embeddings carries the risk of overfitting to a task and data set, though, and would make it harder to attribute performance differences to demographic factors. Since we are only interested in the relative difference between demographic-aware and unaware systems, not in the absolute performance on the tasks, we do not use task-specific embeddings. 2.2 Sentiment Analysis Sentiment analysis is the task of determining the polarity of a document. In our experiments, we use three polarity values: positive, negative, and neutral. To collect data for the sentiment analysis task, we select all reviews that contain the target variable (gender or age), and a star-rating. Following previous work on similar data (Blitzer et al., 2007; Hardt and Wulff, 2012; Elming et al., 2014), we use one, three, or five star ratings, corresponding to negative, neutral, and positive sentiment, respectively. We balance the data sets so that both training and test set contain equal amounts of all three labels. We do this in order to avoid demographicspecific label distributions (women and people over 45 tend to give more positive ratings than men and people under 35, see Section 3.1). 2.3 Topic Identification Topic identification is the task of assigning a highlevel concept to a document that captures its content. In our case, the topic labels are taken from the Trustpilot taxonomy for companies (e.g., Electronics, Pets, etc.). Again, there is a strong gender bias: the most common topic for men is Computer & Accessories, the most common topic among women is Pets. There is thus considerably less overlap between the groups than for the other 4Note, however, that the vocabulary sizes still vary among languages and between age and gender. tasks. In order not to model gender-specific topic bias and to eliminate topic frequency as a confounding factor, we restrict ourselves to the five most frequent labels that occur in both groups. We also ensure that we have the same number of examples for each label in both groups. However, in the interest of data size, we do not enforce a uniform distribution over the five labels (i.e., the classes are not balanced). 2.4 Author Attribute Identification Author attribute identification is the task of inferring demographic factors from linguistic features (Alowibdi et al., 2013; Ciot et al., 2013; Liu and Ruths, 2013). It is often used in author profiling (Koppel et al., 2002) and stylometrics (Goswami et al., 2009; Sarawgi et al., 2011). Rosenthal and McKeown (2011) have shown that these attributes are correlated. In this paper, we restrict ourselves to using gender to predict age, and age to predict gender. 
This serves as an additional test case. Again, we balance the class labels to minimize the effect of any confounding factors. 3 Experiments 3.1 Data Analysis Before we analyze the effect of demographic differences on NLP performance, we investigate whether there is an effect on the non-linguistic correlates, i.e., ratings and topics. To measure the influence of demographic factors on these values, we quantify the distributions over the three sentiment labels and the five topic labels. We analyze both gender and age groups separately, but in the interest of space, average across all languages. [Figure 1: Label distribution for gender. Figure 2: Label distribution for age groups. Both figures plot the proportions of negative/neutral/positive ratings per group.] Figures 1 and 2 show the distributions over sentiment labels. We note that men give more negative and fewer positive ratings than women. The same holds for people in the younger group, who are more skewed towards negative ratings than people in the older group. While the differences are small, they suggest that demographics correlate with rating behavior and have a measurable effect on model performance. The gender distributions over categories exhibit a very different tendency. Table 3 shows that the review categories (averaged over all languages) are highly gender-specific. With the exception of Hotels and Fashion Accessories, the two distributions are almost bimodal opposites. However, they are still significantly correlated (Spearman ρ is 0.49 at p < 0.01). The difference in the two distributions illustrates why we need to control for topic frequency in our experiments. 3.2 Models Classifiers For all tasks, we use logistic regression models5 with standard parameter settings. (Footnote 5: http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html) In order to isolate the effect of demographic differences on performance in all text-classification tasks, we need to represent variable-length documents based only upon the embeddings of the words they contain. We follow Tang et al. (2014) in using convolutional layers over word embeddings (Collobert et al., 2011) to generate fixed-length input representations. Figure 4 schematically shows the procedure for the minimum of a 4-dimensional toy example. For each instance, we collect five N-dimensional statistics over the t by N input matrix, where N is the dimensionality of the embeddings (here: 100), and t is the sentence length in words. From the matrix representation, we compute the dimension-wise minimum, maximum, and mean representation, as well as one standard deviation above and below the mean. We then concatenate those five 100-dimensional vectors to a 500-dimensional vector that represents each instance (i.e., review) as input to the logistic regression classifier. Taking the maximum and minimum across all embedding dimensions is equivalent to representing the exterior surface of the "instance manifold" (the volume in embedding space within which all words in the instance reside). Adding the mean and standard deviation summarizes the density per-dimension within the manifold. This way, we can represent any input sentence solely based on the embeddings, and with the same feature vector dimensionality. 
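A sketch of this feature construction (assuming gensim-style keyed vectors from the earlier sketch; the function and variable names are hypothetical):

import numpy as np

def embed_statistics(tokens, wv):
    """Fixed-length representation of a variable-length review: the dimension-wise
    min, max, mean, mean - std, and mean + std of its word embeddings, concatenated
    (5 x 100 = 500 dimensions for 100-dimensional embeddings)."""
    unk = wv.vectors.mean(axis=0)                                 # OOV token = average vector (Section 2.1)
    M = np.vstack([wv[t] if t in wv else unk for t in tokens])    # t x N matrix
    mean, std = M.mean(axis=0), M.std(axis=0)
    return np.concatenate([M.min(axis=0), M.max(axis=0), mean, mean - std, mean + std])

# X = np.vstack([embed_statistics(review_tokens, male_wv) for review_tokens in reviews])
# followed by sklearn.linear_model.LogisticRegression().fit(X, y), the classifier named in footnote 5.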
[Figure 4: Example for deriving embedding statistics (min, max, mean, mean ± std) from the toy sentence "that was cool" in 4-dimensional space; the minimum is shaded.] The approach is the same for all three tasks, and we did not tune any parameters to maximize performance. The results are thus maximally comparable to each other, albeit far from state-of-the-art. Overall performance could be improved with task-specific features and more sophisticated models, but it would make the results less comparable, and complicate identifying the source of performance differences. We leave this for future research. Comparison In order to compare demographic-aware and agnostic models, we use the following setup for each task and language: 1. In the "agnostic" setting, we train a logistic-regression model using the joint embeddings (i.e., embeddings induced on the corpus containing both sub-groups, e.g., male and female) and group-agnostic training data (i.e., data that contains an equal amount of instances from either sub-group). 2. In the demographic-aware setting, we train a logistic-regression model for each of the two sub-groups (e.g., male and female). For each sub-group, we use the group-specific embeddings (i.e., embeddings induced on, say, male data) and group-specific training data (i.e., instances collected from male data). [Figure 3: Distribution of the 30 most frequent categories per gender over all languages.] We measure F1-performance for both settings (agnostic and demographic-aware) on the test set. The test data contains an equal amount of instances from both sub-groups (say, male and female). We use the demographic-aware classifier appropriate for each instance (e.g., male classifier for male instances), i.e., we assume that the model has access to this information. For many user-generated content settings, this is realistic, since demographic information is available. However, we only predict the target variable (sentiment, topic, or author attribute). We do not require the model to predict the sub-group (age or gender group). We assume that demographic factors hold irrespective of language. We thus compute a macro-F1 over all languages. Micro-F1 would favor languages for which there is more data available, i.e., performance on those languages would dominate the average performance. Since we do not want to ascribe more importance to any particular language, macro-F1 is more appropriate. Even if there is a difference in performance between the agnostic and aware settings, this difference could still be due to the specific data set. In order to test whether the difference is also statistically significant, we use a bootstrap-sampling test. In a bootstrap-sampling test, we sample subsets of the predictions of both settings (with replacement) 10,000 times. For each sample, we measure F1 of both systems, and compare the winning system of the sample to the winning system on the entire data set. 
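A sketch of this bootstrap comparison; using scikit-learn's macro-F1 is an assumption, as the paper does not name the implementation:

import numpy as np
from sklearn.metrics import f1_score

def bootstrap_p(y_true, pred_aware, pred_agnostic, n_samples=10000, seed=0):
    """Fraction of resamples on which the sample winner differs from the overall winner."""
    rng = np.random.RandomState(seed)
    y_true, pred_aware, pred_agnostic = map(np.asarray, (y_true, pred_aware, pred_agnostic))
    overall = f1_score(y_true, pred_aware, average='macro') > \
              f1_score(y_true, pred_agnostic, average='macro')
    flips, n = 0, len(y_true)
    for _ in range(n_samples):
        idx = rng.randint(0, n, n)                                 # sample with replacement
        sample_winner = f1_score(y_true[idx], pred_aware[idx], average='macro') > \
                        f1_score(y_true[idx], pred_agnostic[idx], average='macro')
        if sample_winner != overall:
            flips += 1
    return flips / n_samples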
The number of times 757 SENTIMENT ANALYSIS TOPIC CLASSIFICATION AGE CLASSIFICATION COUNTRY AGNOSTIC AWARE AGNOSTIC AWARE AGNOSTIC AWARE Denmark 61.75 ∗62.00 49.19 ∗50.08 59.94 ∗60.22 France 61.21 61.09 38.45 ∗39.33 53.85 54.21 Germany 60.50 61.36 60.45 61.11 60.19 60.20 UK 65.22 65.12 66.02 66.26 59.78 ∗60.35 US 60.94 61.24 65.64 65.37 61.97 62.68 avg 61.92 62.16 55.95 56.43 59.15 59.53 Table 4: F1 for gender-aware and agnostic models on tasks. Averages are macro average. ∗: p < 0.05 the sample winner differs from the entire data set, divided by 10, 000, is the reported p-value. Bootstrap-sampling essentially simulates runs of the two systems on different data sets. If one system outperforms the other under most of these conditions (i.e., the test returns a low p-value), we can be reasonably sure that the difference is not due to chance. As discussed in Berg-Kirkpatrick et al. (2012) and Søgaard et al. (2014), this test is the most appropriate for NLP data, since it does not make any assumptions about the underlying distributions, and directly takes performance into account. Note that the test still depends on data size, though, so that small differences in performance on larger data sets can be significant, while larger differences on small sets might not. We test for significance with the standard cutoff of p < 0.05. However, even under a bootstrapsampling test, we can only limit the number of likely false positives. If we run enough tests, we increase the chance of reporting a type-I error. In order to account for this effect, we use Bonferroni corrections for each of the tasks. 4 Results For each task, we compare the demographic-aware setting to an agnostic setting. The latter is equivalent to the currently common approach in NLP. For each task and language, the setting with the higher performance is marked in bold. Statistically significant differences (at p < 0.05) are marked with a star (∗). Note that for the macro-averaged scores, we cannot perform bootstrap significance testing. 4.1 Gender Table 4 shows the F1 scores for the different tasks. In the left column of each task (labeled AGNOSTIC), the system is trained on embeddings and data from both genders, in the same ratios as in the test data. This column is similar to the configuration normally used in NLP to date, where – at least in theory – data comes from a uniformly distributed sample. In the right column (labeled AWARE), the classification is based on the classifier trained on embeddings and data from the respective gender. While the improvements are small, they are consistent. We do note some variance in consistency across tasks. The largest average improvement among the three tasks is on topic classification. This improvement is interesting, since we have seen stark differences for the topic distribution between genders. Note, however that we controlled for this factor in our experiments (cf. Table 3). The results thus show that taking gender into account improves topic classification performance even after controlling for prior topic distribution as a confounding factor. The improvements in age classification are the most consistent. This consistency is likely due to the fact that author attributes are often correlated. The fact that the attributes are related can be exploited in stacking approaches, where the attributes are predicted together. Analyzing the errors, the misclassifications for sentiment analysis (the weakest task) seem to be system-independent. Mistakes are mainly due to the simplicity of the system. 
Since we do not explicitly model negation, we incur errors such as “I will never order anywhere else again” classified as negative, even though it is in fact rather positive. 758 SENTIMENT ANALYSIS TOPIC CLASSIFICATION GENDER CLASSIFICATION COUNTRY AGNOSTIC AWARE AGNOSTIC AWARE AGNOSTIC AWARE Denmark 58.74 59.12 45.11 46.00 58.82 58.97 France 53.50 53.40 43.54 42.64 54.64 54.24 Germany 51.91 52.83 ∗56.91 55.41 54.04 54.51 UK 59.72 ∗60.83 59.40 ∗60.88 57.69 ∗58.25 US 55.57 56.00 61.14 61.38 60.05 60.97 avg 55.89 56.44 53.22 53.26 57.05 57.59 Table 5: F1 for age-aware and agnostic models on tasks. Averages are macro average. ∗: p < 0.05 4.2 Age Table 5 presents the results for systems with age as independent demographic variable. Again, we show the difference between the agnostic and age-aware setting in parallel columns for each task. The improvements are similar to the ones for gender. The smaller magnitude across tasks indicates that knowledge of age offers less discriminative power than knowledge of gender. This in itself is an interesting result, suggesting that the age gap is much smaller than the gender gap when it comes to language variation (i.e., older people’s language is more similar to younger people than the language of men is to women). The difference between groups could be a domain-effect, though, caused by the fact that all subjects are using a form of “reviewese” when leaving their feedback. Why this effect would be more prevalent across ages than across genders is not obvious from the data. When averaged over all languages, the ageaware setup again consistently outperforms the agnostic setup, as it did for gender. While the final numbers are lower than in the gender setting, average improvements tend to be just as decisive. 5 Related Work Most work in NLP that has dealt with demographic factors has either a) looked at the correlation of socio-economic attributes with linguistic features (Eisenstein et al., 2011; Eisenstein, 2013a; Eisenstein, 2013b; Doyle, 2014; Bamman et al., 2014a; Eisenstein, to appear), or b) used linguistic features to infer socio-economic attributes (Rosenthal and McKeown, 2011; Nguyen et al., 2011; Alowibdi et al., 2013; Ciot et al., 2013; Liu and Ruths, 2013; Bergsma et al., 2013; Volkova et al., 2015). Our approach is related to the work by Eisenstein (2013a) and Doyle (2014), in that we investigate the influence of extralinguistic factors. Both of them work on Twitter and use geocoding information, whereas we focus on age and gender. Also, rather than correlating with census-level statistics, as in (Eisenstein et al., 2011; Eisenstein, 2013a; Eisenstein, to appear), we take individual information of each author into account. Volkova et al. (2013) also explore the influence of gender and age on text-classification. They include demographic-specific features into their model and show improvements on sentiment analysis in three languages. Our work extends to more languages and three different text-classification tasks. We also use word representations trained on corpora from the various demographic groups, rather than incorporating the differences explicitly as features in our model. Recently, Bamman et al. (2014a) have shown how regional lexical differences (i.e., situated language) can be learned and represented via distributed word representations (embeddings). They evaluate the conditional embeddings intrinsically, to show that the regional representatives of sports teams, parks, etc. 
are more closely associated with the respective hypernyms than other representatives. We also use embeddings conditioned on demographic factors (age and gender instead of location), but evaluate their effect on performance extrinsically, when used as input to an NLP system, rather than intrinsically (i.e., for discovering correlations between language use and demographic statistics). Tang et al. (2014) learn embeddings for sentiment analysis by splitting up their data by rating. 759 We follow their methodology in using embeddings to represent variable length inputs for classification. The experiments on author attribute identification are inspired by a host of previous work (Rosenthal and McKeown, 2011; Nguyen et al., 2011; Alowibdi et al., 2013; Ciot et al., 2013; Liu and Ruths, 2013; Volkova et al., 2015, inter alia). The main difference is that we use embeddings trained on another demographic variable rather than n-gram based features, and that our goal is not to build a state-of-the-art system. 6 Discussion The results in Section 4 have shown that incorporating information on age and gender improves performance across a host of text-classification tasks. Even though the improvements are small and vary from task to task, they hold consistently across three tasks and languages. The magnitude of the improvements could be improved by using task-specific embeddings, additional features, and more sophisticated models. This would obscure the influence of the individual factors, though. The observed improvements are solely due to the fact that different demographic groups use language quite differently. Sociolinguistic research suggests that younger people and women tend to be more creative in their language use than men and older groups. The former are thus often the drivers of language change (Holmes, 2013; Nguyen et al., 2014). Modeling language as uniform loses these distinctions, and thus causes performance drops. As NLP systems are increasingly used for business intelligence and decision making, systematic performance differences carry the danger of disadvantaging minority groups whose language use differs from the norm. 7 Conclusion In this paper, we investigate the influence of age and gender on topic identification, sentiment analysis, and author attribute identification. We induce embeddings conditioned on the respective demographic variable and use those embeddings as sole input to classifiers to build both demographicagnostic and aware models. We evaluate our models on five languages. Our results show that the models using demographic information perform on average better than the agnostic models. The improvements are small, but consistent, and in 8/30 cases, also statistically significant at p < 0.05, according to bootstrap sampling tests. The results indicate that NLP systems can improve classification performance by incorporating demographic information, where available. In most of situated texts (social media, etc.), this is the case. While the improvements vary among tasks, the results suggest that similar to domain adaptation, we should start addressing the problem of demographic adaptation in NLP. Acknowledgements Thanks to ˇZeljko Agi´c, David Bamman, Jacob Eisenstein, Stephan Gouws, Anders Johannsen, Barbara Plank, Anders Søgaard, and Svitlana Volkova for their invaluable feedback, as well as to the anonymous reviewers, whose comments helped improve the paper. The author was supported under ERC Starting Grant LOWLANDS No. 313695. 
References Jalal S Alowibdi, Ugo A Buy, and Philip Yu. 2013. Empirical evaluation of profile characteristics for gender classification on twitter. In Machine Learning and Applications (ICMLA), 2013 12th International Conference on, volume 1, pages 365–369. IEEE. David Bamman, Chris Dyer, and Noah A. Smith. 2014a. Distributed representations of geographically situated language. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 828–834. Proceedings of ACL. David Bamman, Jacob Eisenstein, and Tyler Schnoebelen. 2014b. Gender identity and lexical variation in social media. Journal of Sociolinguistics, 18(2):135–160. Federica Barbieri. 2008. Patterns of age-based linguistic variation in American English. Journal of sociolinguistics, 12(1):58–88. Andrew J Barke. 2000. The Effect of Age on the Style of Discourse among Japanese Women. In Proceedings of the 14th Pacific Asia Conference on Language, Information and Computation, pages 23–34. Taylor Berg-Kirkpatrick, David Burkett, and Dan Klein. 2012. An empirical investigation of statistical significance in NLP. In Proceedings of EMNLP. Shane Bergsma, Mark Dredze, Benjamin Van Durme, Theresa Wilson, and David Yarowsky. 2013. 760 Broadly improving user classification via communication-based name and location clustering on twitter. In HLT-NAACL, pages 1010–1019. John Blitzer, Ryan McDonald, and Fernando Pereira. 2006. Domain adaptation with structural correspondence learning. In Proceedings of EMNLP. John Blitzer, Mark Dredze, and Fernando Pereira. 2007. Biographies, Bollywood, Boom-boxes and Blenders: Domain Adaptation for Sentiment Classification. In Proceedings of ACL. Bo Chen, Wai Lam, Ivor Tsang, and Tak-Lam Wong. 2009. Extracting discriminative concepts for domain adaptation in text mining. In KDD. Minmin Chen, Killiang Weinberger, and John Blitzer. 2011. Co-training for domain adaptation. In NIPS. Morgane Ciot, Morgan Sonderegger, and Derek Ruths. 2013. Gender inference of twitter users in nonenglish contexts. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, Wash, pages 18–21. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. The Journal of Machine Learning Research, 12:2493–2537. Hal Daum´e, Abhishek Kumar, and Avishek Saha. 2010. Frustratingly easy semi-supervised domain adaptation. In ACL Workshop on Domain Adaptation for NLP. Hal Daume III and Daniel Marcu. 2006. Domain adaptation for statistical classifiers. Journal of Artificial Intelligence Research, 26:101–126. Gabriel Doyle. 2014. Mapping dialectal variation by querying social media. In EACL. Jacob Eisenstein, Noah Smith, and Eric Xing. 2011. Discovering sociolinguistic associations with structured sparsity. In Proceedings of ACL. Jacob Eisenstein. 2013a. Phonological factors in social media writing. In Workshop on Language Analysis in Social Media, NAACL. Jacob Eisenstein. 2013b. What to do about bad language on the internet. In Proceedings of NAACL. Jacob Eisenstein. to appear. Systematic patterning in phonologically-motivated orthographic variation. Journal of Sociolinguistics. Jakob Elming, Barbara Plank, and Dirk Hovy. 2014. Robust cross-domain sentiment analysis for lowresource languages. In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 2–7, Baltimore, Maryland, June. 
Association for Computational Linguistics. Sumit Goswami, Sudeshna Sarkar, and Mayur Rustagi. 2009. Stylometric analysis of bloggers’ age and gender. In Third International AAAI Conference on Weblogs and Social Media. Daniel Hardt and Julie Wulff. 2012. What is the meaning of 5*’s? an investigation of the expression and rating of sentiment. In Empirical Methods in Natural Language Processing, page 319. Janet Holmes. 1997. Women, language and identity. Journal of Sociolinguistics, 1(2):195–223. Janet Holmes. 2013. An introduction to sociolinguistics. Routledge. Dirk Hovy, Anders Johannsen, and Anders Søgaard. 2015a. User review-sites as a source for large-scale sociolinguistic studies. In Proceedings of WWW. Dirk Hovy, Barbara Plank, H´ector Mart´ınez Alonso, and Anders Søgaard. 2015b. Mining for unambiguous instances to adapt pos taggers to new domains. In Proceedings of NAACL-HLT. Moshe Koppel, Shlomo Argamon, and Anat Rachel Shimoni. 2002. Automatically categorizing written texts by author gender. Literary and Linguistic Computing, 17(4):401–412. William Labov. 1964. The social stratification of English in New York City. Ph.D. thesis, Columbia university. Wendy Liu and Derek Ruths. 2013. What’s in a name? using first names as features for gender inference in twitter. In Analyzing Microtext: 2013 AAAI Spring Symposium. Ronald Macaulay. 2001. You’re like ‘why not?’ the quotative expressions of glasgow adolescents. Journal of Sociolinguistics, 5(1):3–21. Ronald Macaulay. 2002. Extremely interesting, very interesting, or only quite interesting? adverbs and social class. Journal of Sociolinguistics, 6(3):398– 417. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Lesley Milroy and James Milroy. 1992. Social network and social class: Toward an integrated sociolinguistic model. Language in society, 21(01):1– 26. Dong Nguyen, Noah A Smith, and Carolyn P Ros´e. 2011. Author age prediction from text using linear regression. In Proceedings of the 5th ACLHLT Workshop on Language Technology for Cultural Heritage, Social Sciences, and Humanities, pages 115–123. Association for Computational Linguistics. 761 Dong Nguyen, Dolf Trieschnigg, A. Seza Dogru¨oz, Rilana Gravel, Mariet Theune, Theo Meder, and Franciska De Jong. 2014. Predicting Author Gender and Age from Tweets: Sociolinguistic Theories and Crowd Wisdom. In Proceedings of COLING 2014. Barbara Plank and Alessandro Moschitti. 2013. Embedding semantic similarity in tree kernels for domain adaptation of relation extraction. In Proceedings of ACL. Barbara Plank, Dirk Hovy, Ryan McDonald, and Anders Søgaard. 2014. Adapting taggers to twitter with not-so-distant supervision. In Proceedings of COLING. COLING. Roi Reichart and Ari Rappoport. 2007. Self-training for enhancement and domain adaptation of statistical parsers trained on small datasets. In Proceedings of ACL. John Rickford and Mackenzie Price. 2013. Girlz ii women: Age-grading, language change and stylistic variation. Journal of Sociolinguistics, 17(2):143– 179. Sara Rosenthal and Kathleen McKeown. 2011. Age prediction in blogs: A study of style, content, and online behavior in pre-and post-social media generations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 763– 772. Association for Computational Linguistics. Ruchita Sarawgi, Kailash Gajulapalli, and Yejin Choi. 2011. 
Gender attribution: tracing stylometric evidence beyond topic and genre. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 78–86. Association for Computational Linguistics. Anders Søgaard, Anders Johannsen, Barbara Plank, Dirk Hovy, and H´ector Mart´ınez Alonso. 2014. What’s in a p-value in nlp? In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 1–10, Ann Arbor, Michigan, June. Association for Computational Linguistics. Duyu Tang, Furu Wei, Nan Yang, Ming Zhou, Ting Liu, and Bing Qin. 2014. Learning sentimentspecific word embedding for twitter sentiment classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 1555–1565. Svitlana Volkova, Theresa Wilson, and David Yarowsky. 2013. Exploring demographic language variations to improve multilingual sentiment analysis in social media. In Proceedings of EMNLP, pages 1815–1827. Svitlana Volkova, Yoram Bachrach, Michael Armstrong, and Vijay Sharma. 2015. Inferring latent user properties from texts published in social media (demo). In Proceedings of the Twenty-Ninth Conference on Artificial Intelligence (AAAI), Austin, TX, January. Martijn Wieling, John Nerbonne, and R Harald Baayen. 2011. Quantitative social dialectology: Explaining linguistic variation geographically and socially. PloS one, 6(9):e23613. 762
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 763–773, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Vector-space calculation of semantic surprisal for predicting word pronunciation duration Asad Sayeed, Stefan Fischer, and Vera Demberg Computational Linguistics and Phonetics/M2CI Cluster of Excellence Saarland University 66123 Saarbr¨ucken, Germany {asayeed,sfischer,vera}@coli.uni-saarland.de Abstract In order to build psycholinguistic models of processing difficulty and evaluate these models against human data, we need highly accurate language models. Here we specifically consider surprisal, a word’s predictability in context. Existing approaches have mostly used n-gram models or more sophisticated syntax-based parsing models; this largely does not account for effects specific to semantics. We build on the work by Mitchell et al. (2010) and show that the semantic prediction model suggested there can successfully predict spoken word durations in naturalistic conversational data. An interesting finding is that the training data for the semantic model also plays a strong role: the model trained on indomain data, even though a better language model for our data, is not able to predict word durations, while the out-ofdomain trained language model does predict word durations. We argue that this at first counter-intuitive result is due to the out-of-domain model better matching the “language models” of the speakers in our data. 1 Introduction The Uniform Information Density (UID) hypothesis holds that speakers tend to maintain a relatively constant rate of information transfer during speech production (e.g., Jurafsky et al., 2001; Aylett and Turk, 2006; Frank and Jaeger, 2008). The rate of information transfer is thereby quantified using as each words’ Surprisal (Hale, 2001), that is, a word’s negative log probability in context. Surprisal(wi) = −log P(wi|w1..wi−1) This work makes use of an existing measure of semantic surprisal calculated from a distributional space in order to test whether this measure accounts for an effect of UID on speech production. Our hypothesis is that a word in a semantically surprising context is pronounced with a slightly longer duration than the same word in a semantically less-expected context. In this way, a more uniform rate of information transfer is achieved, because the higher information content of the unexpected word is stretched over a slightly longer time. To our knowledge, the use of this form of surprisal as a pronunciation predictor has never been investigated. The intuition is thus: in a sentence like the sheep ate the long grass, the word grass will have relatively high surprisal if the context only consists of the long. However, a distributional representation that retains the other content words in the sentence, thus representing the contextual similarity of grass to sheep ate, would able to capture the relevant context for content word prediction more easily. In the approach taken here, both types of models are combined: a standard language model is reweighted with semantic similarities in order to capture both short- and more long-distance dependency effects within the sentence. 
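As a toy illustration of the quantity involved (the probabilities below are invented for the example and do not come from any model in the paper):

import math

def surprisal(p):                 # p = P(w | context) from some language model
    return -math.log2(p)          # in bits; the natural logarithm is equally common

# surprisal(0.5)   -> 1.0 bit   (a highly predictable word)
# surprisal(0.125) -> 3.0 bits  (a less predictable word, e.g. "grass" after an unexpected context)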
The semantic surprisal model, a reimplementation of Mitchell (2011), uses a word vector w and a history or context vector h to calculate the language model p(w|h), defining this probability in vector space via cosine similarity. Words that have a higher distributional similarity to their context are thus represented as having a higher probability than words that do not. Thus, we calculate probabilities for words in the context of a sentence in a framework of distributional semantics. Regarding our main hypothesis—that speakers adapt their speech rate as a function of a word’s information content—it is particularly important to 763 us to test this hypothesis on fully “natural” conversational data. Therefore, we use the AMI corpus, which contains transcripts of English-language conversations with orthographically correct transcriptions and precise word pronunciation boundaries in terms of time. We will explain the calculation of semantic surprisal in section 4 (this is so far only described in Mitchell’s 2011 PhD thesis), and then evaluate the effect of an in-domain semantic surprisal model in section 7. Next, we will compare this to the effect of an out-of-domain semantic surprisal model in section 8. The hypothesis is only confirmed for the out-of-domain model, which we argue is due to this model being more similar to the speaker’s internal “model” than the in-domain model. 2 Background 2.1 Surprisal and UID Surprisal is defined in terms of the negative logarithm of the probability of a word in context: S(w) = −log P(w|context), where P(w|context) is the probability of a word given its previous (linguistic) context. It is a measure of information content in which a high surprisal implies low predictability. The use of surprisal in psycholinguistic research goes back to Hale (2001), who used a probabilistic Earley Parser to model the difficulty in parsing so-called garden path sentences (e.g. “The horse raced past the barn fell”), wherein the unexpectedness of an upcoming word or structure influences the language processor’s difficulty. Recent work in psycholinguistics has provided increasing support (e.g., Levy (2008); Demberg and Keller (2008); Smith and Levy (2013); Frank et al. (2013)) for the hypothesis that the surprisal of a word is proportional to the processing difficulty (measured in terms of reading times and EEG event-related potentials) it causes to a human. The Uniform Information Density (UID) hypothesis (Frank and Jaeger, 2008) holds that speakers tend distribute information uniformly across an utterance (in the limits of grammaticality). Information density is quantified in terms of the surprisal of each word (or other linguistic unit) in the utterance. These notions go back to Shannon (1948), who showed that conveying information uniformly close to channel capacity is optimal for communication through a (noisy) communication channel. Frank and Jaeger (2008) investigated UID effects in the SWITCHBOARD corpus at a morphosyntactic level wherein speakers avoid using English contracted forms (“you are” vs. “you’re”) when the contractible phrase is also transmitting a high degree of information in context. In this case, n-gram surprisal was used as the information density measure. Related hypotheses have been suggested by Jurafsky et al. (2001), who related speech durations to bigram probabilities on the Switchboard corpus, and Aylett and Turk (2006), who investigated information density effects at the syllable level. 
They used a read-aloud English speech synthesis corpus, and they found that there is an inverse relationship between the pronunciation duration and the N-gram predictability. Demberg et al. (2012) also use the AMI corpus used in this work, and show that syntactic surprisal (i.e., the surprisal estimated from Roark’s (2009) PCFG parser) can predict word durations in natural speech. Our work expands upon the existing efforts in demonstrating the UID hypothesis by applying surprisal to the level of lexical semantics. 2.2 Distributional semantics Given a means of evaluating the similarity of linguistic units (e.g., words, sentences, texts) in some numerical space that represents the contexts in which they appear, it is possible to approximate the semantics in distributional terms. This is usually done by collecting statistics from a corpus using techniques developed for information retrieval. Using these statistics as a model of semantics is justified in terms of the “distributional hypothesis”, which holds that words used in similar contexts have similar meanings (Harris, 1954). A simple and widely-used type of distributional semantic model is the vector space model (Turney and Pantel, 2010). In such a model, all words are represented each in terms of vectors in a single high-dimensional space. The semantic similarity of words can then be calculated via the cosine of the angle between the vectors in this manner: cos(ϕ) = ⃗a·⃗b |⃗a||⃗b|. Closed-class function words are usually excluded from this calculation. Until relatively recently (Erk, 2012), distributional semantic models did not take into account the finegrained details of syntactic and semantic structure construed in formal terms. 764 3 Corpus The AMI Meeting Corpus (Carletta, 2007) is a multimodal English-language corpus. It contains videos and transcripts of simulated workgroup meetings accompanied by various kinds of annotations. The corpus is available along with its annotations under a free license1. Two-thirds of the videos contain simulated meetings of 4-person design teams assigned to talk about the development of a fictional television remote control. The remaining meetings discuss various other topics. The majority of speakers were non-native speakers of English, although all the conversations were held in English. The corpus contains about 100 hours of material. An important characteristic of this corpus for our work is that the transcripts make use of consistent English orthography (as opposed to being phonetic transcripts). This enables the use of natural language processing techniques that require the reliable identification of words. Grammatical errors, however, remain in the corpus. The corpus includes other annotations such as gesture and dialog acts. Most important for our work are the time spans of word pronunciation, which are precise to the hundredth of a second. We removed interjections, incomplete words, and transcriptions that were still misspelled from the corpus, and we took out all incomplete sentences. This left 951,769 tokens (15,403 types) remaining in the corpus. 4 Semantic surprisal model We make use of a re-implementation of the semantic surprisal model presented in Mitchell et al. (2010). As this paper does not provide a detailed description of how to calculate semantic surprisal, our re-implementation is based on the description in Mitchell’s PhD thesis (2011). In order to calculate surprisal, we need to be able to obtain a good estimate of a word given previous context. 
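Before stepping through Mitchell's definitions, the plain cosine measure from Section 2.2, which the following sections build on and then modify, can be written compactly (a sketch):

import numpy as np

def cosine(a, b):
    # cos(phi) = (a . b) / (|a| |b|), in [-1, 1] for real-valued vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))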
(Footnote 1, corpus download: http://groups.inf.ed.ac.uk/ami/download/) Mitchell uses the following concepts in his model: • hn−1 is the history and represents all the previous words in the sentence. If wn is the current word, then hn−1 = w1 . . . wn−1. The vector-space semantic representation of hn−1 is calculated from the composition of individual word vectors, which we call ⃗hn−1. • context words represent the dimensions of the word vectors. The value of a word vector's component is the co-occurrence of that word with a context word. The context words consist of the most frequent words in the corpus. • we use word class and distinguish between content words and function words, for which we use open and closed classes as a proxy. 4.1 Computing the vector components The proportion between two probabilities, p(ci|w) / p(ci), is used for calculating vector components, where ci is the ith context dimension and w is the given word in the current position. We can calculate each vector component vi for a word vector ⃗v according to the following equation: vi = p(ci|w) / p(ci) = (fciw · ftotal) / (fw · fci) (1) where fciw is the co-occurrence frequency of w and ci together, ftotal is the total corpus size, and fw and fci are the unigram frequencies of w and ci. All future steps in calculating our language model rely on this definition of vi. 4.2 Semantic probabilities For the goal of computing p(w|h), we use the basic idea that the more "semantically coherent" a word is with its history, the more likely it is. Cosine similarity is a common way to define this similarity mathematically in a distributional space, producing a value in the interval [−1, 1]. We use the following definitions, wherein ϕ is the angle between ⃗w and ⃗h: cos(ϕ) = (⃗w · ⃗h) / (|⃗w| |⃗h|) (2) ⃗w · ⃗h = Σi wi hi (3) Mitchell notes that there are at least three problems with using cosine similarity in connection with the construction of a probabilistic model: (a) the sum of all cosine values is not unity, (b) word frequency does not play a role in the calculation, such that a rare synonym of a frequent word might get a high similarity rating, despite low predictability, and (c) the calculation can result in negative values. This problem is addressed by two changes to the notion of dot product used in the calculation of the cosine: ⃗w · ⃗h = Σi (p(ci|w) / p(ci)) (p(ci|h) / p(ci)) (4) The influence of word frequencies is then restored using p(w) and p(ci): p(w|h) = p(w) Σi (p(ci|w) / p(ci)) (p(ci|h) / p(ci)) p(ci) (5) This expression reweights the new scalar product with the likelihood of the given words and the context words. We refer the reader to Mitchell (2011) in order to see that this is a true probability. The application of Bayes' Rule allows us to rewrite the formula as p(w|h) = Σi p(w|ci) p(ci|h). Nevertheless, equation (5) is better suited to our task, as it operates directly over our word vectors. 4.3 Incremental processing Equation (5) provides a conditional probability for a word w and its history h. To calculate the product (p(ci|w) / p(ci)) (p(ci|h) / p(ci)), we need the components of the vectors for w and h at the current position in the sentence. We can get ⃗w directly from the vector space of words. However, ⃗h does not have a direct representation in that space, and it must be constructed compositionally: ⃗h1 = ⃗w1 Initialization (6) ⃗hn = f(⃗hn−1, ⃗wn) Composition (7) f is a vector composition function that can be chosen independently from the model. The history is initialized using the vector of the first word and combined step-by-step with the vectors of the following words. 
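A hedged sketch of equations (1), (5), and (6)-(7); the data structures (co-occurrence and frequency tables) and the choice of component-wise multiplication for f are illustrative assumptions, not choices reported in the paper:

import numpy as np

def word_vector(w, cooc, word_freq, ctx_freq, total_tokens, context_words):
    """Eq. (1): v_i = p(c_i|w) / p(c_i) = (f_{c_i,w} * f_total) / (f_w * f_{c_i})."""
    return np.array([cooc[w].get(c, 0) * total_tokens / (word_freq[w] * ctx_freq[c])
                     for c in context_words])

def semantic_prob(w_vec, h_vec, p_w, p_ctx):
    """Eq. (5): p(w|h) = p(w) * sum_i v_i(w) * v_i(h) * p(c_i), with p_ctx an array of p(c_i)."""
    return p_w * float(np.sum(w_vec * h_vec * p_ctx))

def compose_history(h_vec, w_vec, f=np.multiply):
    """Eqs. (6)-(7): h_1 = w_1, then h_n = f(h_{n-1}, w_n). The model leaves f open;
    component-wise multiplication is used here purely as an example. The normalization
    of equation (8), introduced next, would then be applied to the result."""
    return f(h_vec, w_vec)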
History vectors that arise from the composition step are normalized2: hi = ˆhi P j ˆhjp(cj) Normalization (8) The equations (5), (6), (7), and (8) represent a simple language model, assuming calculation of vector components with equation (1). 2This equation is slightly different from what appears in Mitchell (2011). We present here a corrected formula based on private communication with the author. 4.4 Accounting for word order The model described so far is based on semantic coherence and mostly ignores word order. Consequently, it has poor predictive power. In this section, we describe how a notion of word order is included in the model through the integration of an n-gram language model. Specifically, equation (5) can be represented as the product of two factors: p(w|h) = p(w)∆(w, h) (9) ∆(w, h) = X i p(ci|w) p(ci) p(ci|h) p(ci) p(ci) (10) where ∆is the semantic component that scales p(w) in function of the context. A word w that has a close semantic similarity to a history h should receive higher or lower probability depending on whether ∆is higher or lower than 1. In order to make this into a prediction, p(w) is replaced with a trigram probability. ˆp(wn, hn−1, wn−1 n−2) = p(wn|wn−1 n−2)∆(wn, hn−1) (11) However, this change means that the result is no longer a true probability. Instead, equation 11 can be seen as an estimate of semantic similarity. In order to restore its status as a probability, Mitchell includes another normalization step: p(wn|hn−3, wn−1 n−2) =                      p(wn|wn−1 n−2) Function words ˆp(wn,hn−3,wn−1 n−2) P wc ˆp(wc,hn−3,wn−1 n−2) P wc p(wc|wn−1 n−2) Content words (12) The model hence simply uses the trigram model probability for function words, making the assumption that the distributional representation of such words does not include useful information. On the other hand, content words obtain a portion of the probability mass whose size depends on its similarity estimate ˆp(wn, hn−3, wn−1 n−2) relative to the similarity estimates of all other words P wc ˆp(wc, hn−3, wn−1 n−2). The factor P wc p(wc|wn−1 n−2) ensures that not all of the probability mass is divided up among the content words wc; rather, only the mass assigned by the n-gram model at position wn−1 n−2 is re-distributed. The 766 probability mass of the function words remains unchanged. Mitchell (2011) restricts the history so that only words outside the trigram window are taken into account in order to keep the n-gram model and the semantic similarity model independent. Thus, the n-gram model represents local dependencies, and the semantic model represents longer-distance dependencies. The final model that we use in our experiment consists of equations (1), (6), (7), (8) and (12). 5 Evaluation Methods Our goal is to test whether semantically reweighted surprisal can explain spoken word durations over and above more simple factors that are known to influence word durations, such as word length, frequency and predictability using a simpler language model. Our first experiment tests whether semantic surprisal based on a model trained using in-domain data is predictive of word pronunciation duration, considering the UID hypothesis. For our in-domain model, we estimate surprisal using 10-fold cross-validation over the AMI corpus: we divide the corpus into ten equally-sized segments and produce surprisal values for each word in each segment based on a model trained from the other nine segments. 
We then use linear mixed effects modeling (LME) via the lme4 package in R (Pinheiro and Bates, 2000; Bates et al., 2014) in order to account for word pronunciation duration. We follow the approach of Demberg et al. (2012). Linear mixed effects modelling is a generalization of linear regression modeling and includes both fixed effects and random effects. This is particularly useful when we have statistical units (e.g., speakers), each with its own set of repeated measures (e.g., word durations) and its own particular characteristics (e.g., some speakers naturally speak more slowly than others). These are the random effects. The fixed effects are those characteristics that are expected not to vary across such units. LME modeling learns coefficients for all of the predictors, defining a regression equation that should account for the data in the dependent variable (in our case, word pronunciation duration). The variance in the data that a model cannot explain is referred to as the residual.

We denote statistical significance in the following way: *** means a p-value ≤ 0.001, ** means p ≤ 0.01, * means p ≤ 0.05, and no stars means that the predictor is not significant (p > 0.05).

In our regression models, all the variables are centered and scaled to reduce effects of correlations between predictors. Furthermore, we log-transformed the response variable (actual spoken word durations from the corpus) as well as the duration estimates from the MARY speech synthesis system to obtain more normal distributions, which is a prerequisite for applying the LME models. All conclusions drawn here also hold for versions of the model where no log transformation is used.

From the AMI corpus, we filter out data points (words) that have a pronunciation duration of zero or that are longer than two seconds, the latter in order to avoid including such things as pauses for thought. We also remove items that are not represented in Gigaword. That leaves us with 790,061 data points for further analysis. However, in our semantic model, function words are not affected by the Δ semantic similarity adjustment and are therefore not analyzable for the effect of semantically-weighted trigram predictability. That leaves 260k data points for analysis in the models.
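For readers who want to reproduce this kind of analysis, the following Python sketch uses pandas and statsmodels as a stand-in for the paper's R/lme4 setup. The file name and column names are assumptions, and statsmodels supports a simpler random-effects structure than lme4, so this is only an approximation of the models reported below.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ami_durations.csv")                 # hypothetical per-word export
df = df[(df.duration > 0) & (df.duration <= 2.0)]     # drop zero and >2s durations
df["log_dur"] = np.log(df["duration"])                # log-transform the response
for col in ["d_mary", "f_ami", "f_giga", "s_giga3"]:
    df[col] = (df[col] - df[col].mean()) / df[col].std()   # center and scale

# Random intercept for speaker plus a random slope for d_mary under speaker.
model = smf.mixedlm("log_dur ~ d_mary * f_ami + f_giga + s_giga3",
                    df, groups=df["speaker"], re_formula="~d_mary")
print(model.fit().summary())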
6 Baseline model

As a first step, we estimate a baseline model which does not include the in-domain semantic surprisal. The response variable in this model is the word duration observed in the corpus. Predictor variables include DMARY (the context-dependent spoken word duration as estimated by the MARY speech synthesis system), word frequency estimates from the same domain as well as from the Gigaword corpus (FAMI and FGiga, both as log relative frequencies), the interaction between estimated word durations and in-domain frequency (DMARY:FAMI), and a domain-general trigram surprisal predictor (SGiga-3gram). Our model also includes a random intercept for each speaker, as well as random slopes under speaker for DMARY and SGiga-3gram. The baseline model is shown in Table 1.

Predictor       Coefficient   t-value   Sig.
(Intercept)         0.034        4.90   ***
DMARY               0.427      143.97   ***
FAMI               -0.137      -60.26   ***
FGiga              -0.051      -18.92   ***
SGiga-3gram         0.032       10.94   ***
DMARY:FAMI         -0.003       -2.12   *

Table 1: Fixed effects of the baseline model, estimated over the data points for which we could calculate semantic surprisal.

All predictors in the baseline model shown in Table 1 significantly improve model fit. We can see that the MARY-TTS estimated word durations are a positive, highly significant predictor in the model. Furthermore, the word frequency estimates from the domain-general corpus as well as the in-domain frequency estimates are significant negative predictors of word durations; this means that, as expected, word durations are shorter for more frequent words. We can furthermore see that n-gram surprisal is a significant positive predictor of spoken word durations; i.e., more unexpected words have longer durations than otherwise predicted. Finally, there is also a significant interaction between estimated word durations and in-domain word frequency, which means that the duration of long and frequent words is corrected slightly downward.

7 Experiment 1: in-domain model

The AMI corpus contains spoken conversations, and is thus quite different from the written corpora we have available. When we train an n-gram model in domain (using 10-fold cross-validation), perplexities for the in-domain model (67.9) are much lower than for a language model trained on Gigaword (359.7), showing that the in-domain model is a better language model for the data.3

3 Low perplexity estimates are reflective of the spoken conversational domain. Perplexities on content words are much higher: 357.3 for the in-domain model and 2169.8 for the out-of-domain model.

In order to see the effect of semantic surprisal estimated based on the in-domain language model and reweighted for semantic similarity within the same sentence as described in Section 3, we then expand the baseline model, adding SSemantics as a predictor. Table 2 shows the fixed effects of this expanded model. The predictor for semantic surprisal is significant, but the coefficient is negative. This apparently contradicts our hypothesis that semantic surprisal has a UID effect on pronunciation duration, such that higher SSemantics means higher DAMI. We found that these results are very stable; in particular, the same results also hold if we estimate a separate model with SSemantics as a predictor and the residuals of the baseline model as the response variable, and when we include in-domain semantic surprisal in a model where n-gram surprisal from the out-of-domain corpus is not included as a predictor variable.

Predictor       Coefficient   t-value   Sig.
(Intercept)         0.031        4.53   **
DMARY               0.428      144.06   ***
FAMI               -0.148      -59.15   ***
FGiga              -0.043      -15.10   ***
SGiga-3gram         0.047       14.60   ***
SSemantics         -0.028       -9.78   ***
DMARY:FAMI         -0.003       -2.27   *

Table 2: Fixed effects of the baseline model with semantic surprisal (including also a random slope for semantic surprisal under subject).

Figure 1: GAM-calculated spline for SSemantics for the in-domain model.

In order to understand the unexpected behaviour of SSemantics, we make use of a generalized additive model (GAM) with the R package mgcv. Compared to LME models, GAMs do not assume a linear form for the predictors; instead, for every predictor, a GAM can fit a spline. We learn a GAM using the residuals of the baseline model as a response variable and fitting semantic surprisal based on the in-domain model; see Figure 1. In Figure 1, we see that SSemantics is poorly fit by a linear function. In particular, there are two intervals in the curve. Between surprisal values 0 and 1.5, the curve falls, but between 1.5 and 4, it rises. (For high surprisal values, there are too few data points from which to draw conclusions.) Therefore, we decided to divide the data up into data points with SSemantics above 1.5 and below 1.5.
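A rough sketch of this spline check is given below, using the pygam package as a stand-in for the paper's mgcv setup; the DataFrame columns resid_baseline and s_semantics are assumed names for the baseline-model residuals and the in-domain semantic surprisal values.

from pygam import LinearGAM, s

# Fit a single smooth term for semantic surprisal against the residuals.
gam = LinearGAM(s(0)).fit(df[["s_semantics"]], df["resid_baseline"])
gam.summary()
# The fitted spline (cf. Figure 1) can be inspected via partial dependence:
XX = gam.generate_X_grid(term=0)
spline = gam.partial_dependence(term=0, X=XX)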
We then modelled the effect of SSemantics on the residuals of the baseline model, with SSemantics as a random effect. This is to remove a possible effect of collinearity between SSemantics and the other predictors.

Interval of SSemantics   Predictor     Coef.     t-value   Sig.
[0, ∞[                   (Intercept)    0          0
                         SSemantics    -0.013     -7.01    ***
[0, 1.5[                 (Intercept)    0          0
                         SSemantics    -0.06     -18.56    ***
[1.5, ∞[                 (Intercept)    0          0
                         SSemantics     0.013      5.50    ***

Table 3: Three models of SSemantics as a random effect over the residuals of baseline models learned from the remaining fixed effects. The first model is over the entire range.

Table 3 shows that the random effect of semantic surprisal is positive and significant in the range of semantic surprisal above 1.5. That low surprisals have the opposite effect compared to what we expect suggests to us that using the AMI corpus as an in-domain source of training data presents a problem. The observed result for the relationship between semantic surprisal and spoken word durations does not only hold for the semantic surprisal model, but also for the standard non-weight-adjusted in-domain trigram model. We therefore hypothesize that our semantic surprisal model is producing surprisal values that are low because the corresponding words are common in this domain (both higher frequencies and higher similarities), but speakers are coming to the AMI task with "models" trained on out-of-domain data. Thus, words that are apparently very low-surprisal display longer pronunciation durations as an artifact of the model. To test this, we conducted a second experiment, for which we built a model with out-of-domain data.

8 Experiment 2: out-of-domain training

In order to test for the effect of possible underestimation of surprisal due to in-domain training, we also tested the semantic surprisal model when trained on more domain-general text. As training data for our semantic model, we use a randomly selected 1% (by sentence) of the English Gigaword 5.0 corpus. This is lowercased, with hapax legomena treated as unknown words. We test the model against the entire AMI corpus. Furthermore, we also compare our semantic surprisal values to the syntactic surprisal values calculated by Demberg et al. (2012) for the AMI corpus, which we obtained from the authors. As noted above, the out-of-domain language model has higher perplexity on the AMI corpus; that is, it is a lower-performing language model. On the other hand, it may represent overall speaker experience more accurately than the in-domain model; in other words, it may be a better model of the speaker.

8.1 Results

Once again, the semantic surprisal model only differs from a general n-gram model on content words. We therefore first compare whether the model that is reweighted for semantic surprisal can explain more of the variance than the same model without semantic reweighting. We again use the same baseline model as for the in-domain experiment; see Table 1. As the semantic surprisal model represents a reweighted trigram model, there is a high correlation between the trigram model and the semantic surprisal model. We thus need to know whether the semantically reweighted model is better than the simple trigram model. When we compare a model that contains both trigram surprisal and semantic surprisal as predictors, we find that this model is significantly better than the model including only trigram surprisal (AIC of baseline model: 618427; AIC of model with semantic surprisal: 618394; χ2 = 35.8; p < 0.00001).
On the other hand, the model including both predictors is only marginally better than the model including only semantic surprisal (AIC of the semantic surprisal model: 618398). This means that the simpler trigram surprisal model does not contribute anything over the semantic model, and that the semantic model fits the word duration data better. Table 4 shows the model with semantic surprisal as a predictor.

Predictor       Coefficient   t-value   Sig.
(Intercept)         0.034        4.90   ***
DMARY               0.427      144.36   ***
FAMI               -0.135      -58.76   ***
FGiga              -0.053      -19.99   ***
SSemantics          0.034       11.70   ***
DMARY:FAMI         -0.003       -2.09   *

Table 4: Model of spoken word durations, with a random intercept and random slopes for DMARY and SSemantics under speaker.

Figure 2: GAM-calculated spline for SSemantics for the out-of-domain model.

Furthermore, we wanted to check whether our hypothesis about the negative result for the in-domain model was indeed due to an underestimation of surprisal of in-domain words for the in-domain model. We again calculate a GAM showing the effect of out-of-domain semantic surprisal in a model also containing the baseline predictors; see Figure 2. We can see that word durations increase with increasing semantic surprisal, and that there is in particular no effect of longer word durations for low-surprisal words. This result is also confirmed by LME models splitting up the data into small and large surprisal values, as done for the in-domain model in Table 3; semantic surprisal based on the out-of-domain model is a significant positive predictor in both data ranges.

Next, we tested whether the semantic similarity model improves model fit over and above a model also containing syntactic surprisal as a predictor. We find that syntactic surprisal improves model fit over and above the model including semantic surprisal (χ2 = 309.5; p < 0.00001), and that semantic surprisal improves model fit over and above a model including syntactic surprisal and trigram surprisal (χ2 = 28.5; p < 0.00001). Table 5 shows the model containing both syntactic surprisal (based on the Roark parser (Roark et al., 2009); see also Demberg et al. (2012) for the use of syntactic surprisal for estimating spoken word durations) and semantic surprisal.

Predictor       Coefficient   t-value   Sig.
(Intercept)        -0.058       -6.58   ***
DMARY               0.425      144.04   ***
FAMI               -0.131      -57.04   ***
FGiga              -0.051      -19.41   ***
SSyntax             0.011       17.61   ***
SSemantics          0.015        4.99   ***
DMARY:FAMI         -0.007       -4.44   ***

Table 5: Linear mixed effects model for spoken word durations in the AMI corpus, for a model including both syntactic and semantic surprisal as predictors as well as a random intercept and slopes for DMARY and SSemantics under speaker.

Finally, we split our dataset into data from native and non-native speakers of English (305 native speakers vs. 376 non-native speakers). Table 6 shows generally larger effects for native than non-native speakers. In particular, the interaction between duration estimates and word frequencies, and semantic surprisal, were not significant predictors in the non-native speaker model (however, random slopes for semantic surprisal under speaker still improved model fit very strongly, showing that non-native speakers differ in whether and how they take into account semantic surprisal during language production).

9 Discussion

Our analysis shows that high information density at one linguistic level of description (for example, syntax or semantics) can lead to a compensatory effect at a different linguistic level (here, spoken word durations).
Our data also shows, however, that the choice of training data for the models is important. A language model trained exclusively in a specific domain, while a good language model, may not be representative of a speaker's overall language experience. This is particularly relevant for the AMI corpus, in which groups of researchers are discussing the design of a remote control, but where it is not necessarily the case that these people discuss remote controls very frequently. Furthermore, none of the speakers were present in the whole corpus, and most of the > 600 speakers participated only in very few meetings. This means that the in-domain language model strongly over-estimates people's familiarity with the domain.

                 Native Speaker                    Non-native Speaker
Predictor      Coefficient  t-value  Sig.       Coefficient  t-value  Sig.
(Intercept)      -0.1706     -13.76  ***           0.035        3.42  ***
DMARY             0.4367     105.43  ***           0.415      104.09  ***
FAMI             -0.1407     -42.54  ***          -0.122      -38.66  ***
FGiga            -0.0421     -11.07  ***          -0.063      -18.70  ***
SSyntax           0.0132      14.22  ***           0.009       11.96  ***
SSemantics        0.0246       5.89  ***
DMARY:FAMI       -0.0139      -6.12  ***

Table 6: Linear mixed effects models for spoken word durations in the AMI corpus, for native as well as non-native speakers of English separately. The models include both syntactic and semantic surprisal as fixed effects, and a random intercept and slope for DMARY and SSemantics under speaker.

Words that are highly predictable for the in-domain model (but which are not highly predictable in general) were not pronounced faster, as evident in our first analysis. When semantic surprisal is instead estimated based on a more domain-general text like Gigaword, we find a significant positive effect of semantic surprisal on spoken word durations across the complete spectrum from very predictable to unpredictable words.

These results also point to an interesting scientific question: to what extent do people use their domain-general model for adapting their language and speech production in a specific situation, and to what extent do they use a domain-specific model for adaptation? Do people adapt during a conversation, such that in-domain models would be more relevant for language production in situations where speakers are more versed in the domain?

10 Conclusions and future work

We have described a method by which it is possible to connect a semantic level of representation (estimated using a distributional model) to observations about speech patterns at the word level. From a language science or psycholinguistic perspective, we have shown that semantic surprisal affects spoken word durations in natural conversational speech, thus providing additional supportive evidence for the uniform information density hypothesis. In particular, we find evidence that UID effects connect linguistic levels of representation, providing more information about the architecture of the human processor or generator. This work also has implications for designers of speech synthesis systems: our results point towards using high-level information about the rate of information transfer, measured in terms of surprisal, for estimating word durations in order to make artificial word pronunciation systems sound more natural. Finally, the strong effect of training data domain raises scientific questions about how speakers use domain-general and domain-specific knowledge in communicative cooperation with listeners at the word pronunciation level.
One possible next step would be to expand this work to more complex semantic spaces which include stronger notions of compositionality, semantic roles, and so on, such as the distributional approaches of Baroni and Lenci (2010), Sayeed and Demberg (2014), and Greenberg et al. (2015) that contain grammatical information but rely on vector operations.

Acknowledgements

This research was funded by the German Research Foundation (DFG) as part of SFB 1102 "Information Density and Linguistic Encoding".

References

Aylett, M. and Turk, A. (2006). Language redundancy predicts syllabic duration and the spectral characteristics of vocalic syllable nuclei. The Journal of the Acoustical Society of America, 119(5):3048-3058.

Baroni, M. and Lenci, A. (2010). Distributional memory: A general framework for corpus-based semantics. Computational Linguistics, 36(4):673-721.

Bates, D., Mächler, M., Bolker, B. M., and Walker, S. C. (2014). Fitting linear mixed-effects models using lme4. ArXiv e-print; submitted to Journal of Statistical Software.

Carletta, J. (2007). Unleashing the killer corpus: experiences in creating the multi-everything AMI meeting corpus. Language Resources and Evaluation, 41(2):181-190.

Demberg, V. and Keller, F. (2008). Data from eye-tracking corpora as evidence for theories of syntactic processing complexity. Cognition, 109(2):193-210.

Demberg, V., Sayeed, A., Gorinski, P., and Engonopoulos, N. (2012). Syntactic surprisal affects spoken word duration in conversational contexts. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 356-367, Jeju Island, Korea. Association for Computational Linguistics.

Erk, K. (2012). Vector space models of word meaning and phrase meaning: A survey. Language and Linguistics Compass, 6(10):635-653.

Frank, A. F. and Jaeger, T. F. (2008). Speaking rationally: Uniform information density as an optimal strategy for language production. In Love, B. C., McRae, K., and Sloutsky, V. M., editors, Proceedings of the 30th Annual Conference of the Cognitive Science Society, pages 939-944. Cognitive Science Society.

Frank, S. L., Otten, L. J., Galli, G., and Vigliocco, G. (2013). Word surprisal predicts N400 amplitude during reading. In ACL (2), pages 878-883.

Greenberg, C., Sayeed, A., and Demberg, V. (2015). Improving unsupervised vector-space thematic fit evaluation via role-filler prototype clustering. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT).

Hale, J. (2001). A probabilistic Earley parser as a psycholinguistic model. In Proceedings of the Second Meeting of the North American Chapter of the Association for Computational Linguistics on Language Technologies, NAACL '01, pages 1-8, Stroudsburg, PA, USA. Association for Computational Linguistics.

Harris, Z. S. (1954). Distributional structure. Word, 10(2-3):146-162.

Jurafsky, D., Bell, A., Gregory, M., and Raymond, W. D. (2001). Probabilistic relations between words: Evidence from reduction in lexical production. Typological Studies in Language, 45:229-254.

Levy, R. (2008). Expectation-based syntactic comprehension. Cognition, 106(3):1126-1177.

Mitchell, J., Lapata, M., Demberg, V., and Keller, F. (2010). Syntactic and semantic factors in processing difficulty: An integrated measure. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 196-206.
Association for Computational Linguistics.

Mitchell, J. J. (2011). Composition in distributional models of semantics. PhD thesis, The University of Edinburgh.

Pinheiro, J. C. and Bates, D. M. (2000). Mixed-Effects Models in S and S-PLUS. Statistics and Computing. Springer.

Roark, B., Bachrach, A., Cardenas, C., and Pallier, C. (2009). Deriving lexical and syntactic expectation-based measures for psycholinguistic modeling via incremental top-down parsing. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 324-333, Singapore. Association for Computational Linguistics.

Sayeed, A. and Demberg, V. (2014). Combining unsupervised syntactic and semantic models of thematic fit. In Proceedings of the First Italian Conference on Computational Linguistics (CLiC-it 2014).

Shannon, C. E. (1948). A mathematical theory of communication. Bell System Technical Journal, 27(379-423):623-656.

Smith, N. J. and Levy, R. (2013). The effect of word predictability on reading time is logarithmic. Cognition, 128(3):302-319.

Turney, P. D. and Pantel, P. (2010). From frequency to meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37:141-188.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 774–784, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Efficient Methods for Inferring Large Sparse Topic Hierarchies Doug Downey, Chandra Sekhar Bhagavatula, Yi Yang Electrical Engineering and Computer Science Northwestern University [email protected],{csb,yiyang}@u.northwestern.edu Abstract Latent variable topic models such as Latent Dirichlet Allocation (LDA) can discover topics from text in an unsupervised fashion. However, scaling the models up to the many distinct topics exhibited in modern corpora is challenging. “Flat” topic models like LDA have difficulty modeling sparsely expressed topics, and richer hierarchical models become computationally intractable as the number of topics increases. In this paper, we introduce efficient methods for inferring large topic hierarchies. Our approach is built upon the Sparse Backoff Tree (SBT), a new prior for latent topic distributions that organizes the latent topics as leaves in a tree. We show how a document model based on SBTs can effectively infer accurate topic spaces of over a million topics. We introduce a collapsed sampler for the model that exploits sparsity and the tree structure in order to make inference efficient. In experiments with multiple data sets, we show that scaling to large topic spaces results in much more accurate models, and that SBT document models make use of large topic spaces more effectively than flat LDA. 1 Introduction Latent variable topic models, such as Latent Dirichlet Allocation (Blei et al., 2003), are popular approaches for automatically discovering topics in document collections. However, learning models that capture the large numbers of distinct topics expressed in today’s corpora is challenging. While efficient methods for learning large topic models have been developed (Li et al., 2014; Yao et al., 2009; Porteous et al., 2008), these methods have focused on “flat” topic models such as LDA. Flat topic models over large topic spaces are prone to overfitting: even in a Web-scale corpus, some words are expressed rarely, and many documents are brief. Inferring a large topic distribution for each word and document given such sparse data is challenging. As a result, LDA models in practice tend to consider a few thousand topics at most, even when training on billions of words (Mimno et al., 2012). A promising alternative to flat topic models is found in recent hierarchical topic models (Paisley et al., 2015; Blei et al., 2010; Li and McCallum, 2006; Wang et al., 2013; Kim et al., 2013; Ahmed et al., 2013). Topics of words and documents can be naturally arranged into hierarchies. For example, an article on the topic of the Chicago Bulls is also relevant to the more general topics of NBA, Basketball, and Sports. Hierarchies can combat data sparsity: if data is too sparse to place the term “Pau Gasol” within the Chicago Bulls topic, perhaps it can be appropriately modeled at somewhat less precision within the Basketball topic. A hierarchical model can make fine-grained distinctions where data is plentiful, and back-off to more coarse-grained distinctions where data is sparse. However, current hierarchical models are hindered by computational complexity. 
The existing inference methods for the models have runtimes that increase at least linearly with the number of topics, making them intractable on large corpora with large numbers of topics.

In this paper, we present a hierarchical topic model that can scale to large numbers of distinct topics. Our approach is built upon a new prior for latent topic distributions called a Sparse Backoff Tree (SBT). SBTs organize the latent topics as leaves in a tree, and smooth the distributions for each topic with those of similar topics nearby in the tree. SBT priors use absolute discounting and learned backoff distributions for smoothing sparse observation counts, rather than the fixed additive discounting utilized in Dirichlet and Chinese Restaurant Process models. We show how the SBT's characteristics enable a novel collapsed sampler that exploits the tree structure for efficiency, allowing SBT-based document models (SBTDMs) that scale to hierarchies of over a million topics.

We perform experiments in text modeling and hyperlink prediction, and find that SBTDM is more accurate compared to LDA and the recent nested Hierarchical Dirichlet Process (nHDP) (Paisley et al., 2015). For example, SBTDMs with a hundred thousand topics achieve perplexities 28-52% lower when compared with a standard LDA configuration using 1,000 topics. We verify that the empirical time complexity of inference in SBTDM increases sub-linearly in the number of topics, and show that for large topic spaces SBTDM is more than an order of magnitude more efficient than the hierarchical Pachinko Allocation Model (Mimno et al., 2007) and nHDP. Lastly, we release an implementation of SBTDM as open-source software.1

1 http://websail.cs.northwestern.edu/projects/sbts/

2 Previous Work

The intuition in SBTDM that topics are naturally arranged in hierarchies also underlies several other models from previous work. Paisley et al. (2015) introduce the nested Hierarchical Dirichlet Process (nHDP), which is a tree-structured generative model of text that generalizes the nested Chinese Restaurant Process (nCRP) (Blei et al., 2010). Both the nCRP and nHDP model the tree structure as a random variable, defined over a flexible (potentially infinite in number) topic space. However, in practice the infinite models are truncated to a maximal size. We show in our experiments that SBTDM can scale to larger topic spaces and achieve greater accuracy than nHDP. To our knowledge, our work is the first to demonstrate a hierarchical topic model that scales to more than one million topics, and to show that the larger models are often much more accurate than smaller models. Similarly, compared to other recent hierarchical models of text and other data (Petinot et al., 2011; Wang et al., 2013; Kim et al., 2013; Ahmed et al., 2013; Ho et al., 2010), our focus is on scaling to larger data sets and topic spaces.

The Pachinko Allocation Model (PAM) introduced by Li & McCallum (Li and McCallum, 2006) is a general approach for modeling correlations among topic variables in latent variable models. Hierarchical organizations of topics, as in SBT, can be considered as a special case of a PAM, in which inference is particularly efficient. We show that our model is much more efficient than an existing PAM topic modeling implementation in Section 5. Hu and Boyd-Graber (2012) present a method for augmenting a topic model with known hierarchical correlations between words (taken from e.g. WordNet synsets).
By contrast, our focus is on automatically learning a hierarchical organization of topics from data, and we demonstrate that this technique improves accuracy over LDA. Lastly, SparseLDA (Yao et al., 2009) is a method that improves the efficiency of inference in LDA by only generating portions of the sampling distribution when necessary. Our collapsed sampler for SBTDM utilizes a related intuition at each level of the tree in order to enable fast inference.

3 Sparse Backoff Trees

In this section, we introduce the Sparse Backoff Tree, which is a prior for a multinomial distribution over a latent variable. We begin with an example showing how an SBT transforms a set of observation counts into a probability distribution.

Consider a latent variable topic model of text documents, similar to LDA (Blei et al., 2003) or pLSI (Hofmann, 1999). In the model, each token in a document is generated by first sampling a discrete latent topic variable Z from a document-specific topic distribution, and then sampling the token's word type from a multinomial conditioned on Z. We will focus on the document's distribution over topics, ignoring the details of the word types for illustration. We consider a model with 12 latent topics, denoted as integers from the set {1, . . . , 12}. Assume we have assigned latent topic values to five tokens in the document, specifically the topics {1, 4, 4, 5, 12}. We indicate the number of times topic value z has been selected as nz (Figure 1). Given the five observations, the key question faced by the model is: what is the topic distribution over a sixth topic variable from the same document? In the case of the Dirichlet prior utilized for the topic distribution in LDA, the probability that the sixth topic variable has value z is proportional to nz + α, where α is a hyperparameter of the model.

Figure 1: An example Sparse Backoff Tree over 12 latent variable values, showing the observation counts nz (here n1 = 1, n4 = 2, n5 = 1, n12 = 1), the interior-node discounts δ (0.24 at the root, 0.36 and 0.30 at the lower levels), and the resulting pseudo-counts P(Z|S, n) (e.g., 0.46 for topic 1 and 1.56 for topic 4).

SBT differs from LDA in that it organizes the topics into a tree structure in which the topics are leaves (see Figure 1). In this paper, we assume the tree structure, like the number of latent topics, is manually selected in advance. With an SBT prior, the estimate of the probability of a topic z is increased when nearby topics in the tree have positive counts. Each interior node a of the SBT has a discount δa associated with it. The SBT transforms the observation counts nz into pseudo-counts (shown in the last row in the figure) by subtracting δa from each non-zero descendant of each interior node a, and re-distributing the subtracted value uniformly among the descendants of a. For example, the first state has a total of 0.90 subtracted from its initial count n1 = 1, and then receives 0.30/3 from its parent, 1.08/6 from its grandparent, and 0.96/12 from the root node, for a total pseudo-count of 0.46. The document's distribution over a sixth topic variable is then proportional to these pseudo-counts.

When each document tends to discuss a set of related topics, the SBT prior will assign a higher likelihood to the data when related topics are located nearby in the tree. Thus, by inferring latent variable values to maximize likelihood, nearby leaves in the tree will come to represent related topics. SBT, unlike LDA, encodes the intuition that a topic becomes more likely in a document that also discusses other, related topics.
In the example, the pseudo-count the SBT produces for topic six (which is related to other topics that occur in the document) is almost three times larger than that of topic eight, even though the observation counts are zero in each case. In LDA, topics six and eight would have equal pseudo-counts (proportional to α).

3.1 Definitions

Let Z be a discrete random variable that takes integer values in the set {1, . . . , L}. Z is drawn from a multinomial parameterized by a vector θ of length L.

Definition 1 A Sparse Backoff Tree SBT(T, δθ, Q(z)) for the discrete random variable Z consists of a rooted tree T containing L leaves, one for each value of Z; a coefficient δa > 0 for each interior node a of T; and a backoff distribution Q(z).

Figure 1 shows an example SBT. The example includes simplifications we also utilize in our experiments, namely that all nodes at a given depth in the tree have the same number of children and the same δ value. However, the inference techniques we present in Section 4 are applicable to any tree T and set of coefficients {δa}.

For a given SBT S, let ∆S(z) indicate the sum of all δa values for all ancestors a of leaf node z (i.e., all interior nodes on the path from the root to z). For example, in the figure, ∆S(z) = 0.90 for all z. This amount is the total absolute discount that the SBT applies to the random variable value z.

We define the SBT prior implicitly in terms of the posterior distribution it induces on a random variable Z drawn from a multinomial θ with an SBT prior, after θ is integrated out. Let the vector n = [n1, . . . , nL] denote the sufficient statistics for any given observations drawn from θ, where nz is the number of times value z has been observed. Then, the distribution over a subsequent draw of Z given SBT prior S and observations n is defined as:

P(Z = z \mid S, n) \equiv \frac{\max(n_z - \Delta_S(z),\, 0) + B(S, z, n)\, Q(z)}{K(S, \sum_i n_i)}    (1)

where K(S, \sum_i n_i) is a normalizing constant that ensures the distribution sums to one for any fixed number of observations \sum_i n_i, and B(S, z, n) and Q(z) are defined as below.

The quantity B(S, z, n) expresses how much of the discounts from all other leaves z′ contribute to the probability of z. For an interior node a, let desc(a) indicate the number of leaves that are descendants of a, and let desc+(a) indicate the number of leaf descendants z of a that have non-zero values nz. Then the contribution of the discount δa of node a to each of its descendant leaves is b(a, n) = δa desc+(a)/desc(a). We then define B(S, z, n) to be the sum of b(a, n) over all interior nodes a on the path from the root to z.

The function Q(z) is a backoff distribution. It allows the portion of the discount probability mass that is allocated to z to vary with a proposed distribution Q(z). This is useful because in practice the SBT is used as a prior for a conditional distribution, for example the distribution P(Z|w) over topic Z given a word w in a topic model of text. In that case, an estimate of P(Z|w) for a rare word w may be improved by "mixing in" the marginal topic distribution Q(z) = P(Z = z), analogous to backoff techniques in language modeling. Our document model described in the following section utilizes two different Q functions, one uniform (Q(z) = 1/T) and another related to the marginal topic distribution P(z).
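To make Equation (1) concrete, the following Python sketch computes the SBT pseudo-counts for the example of Figure 1 under a uniform backoff Q. The dictionary-based tree encoding (ancestors, discounts, leaves per interior node) is our own illustrative choice, not the authors' data structure.

def sbt_pseudocounts(counts, ancestors, delta, leaves_under):
    """counts: {leaf: n_z}; ancestors: {leaf: [interior node ids]};
    delta: {node: discount}; leaves_under: {node: list of descendant leaves}."""
    nonzero = {a: sum(1 for z in leaves_under[a] if counts.get(z, 0) > 0)
               for a in delta}
    # b(a, n) = delta_a * desc+(a) / desc(a): mass node a redistributes per leaf.
    b = {a: delta[a] * nonzero[a] / len(leaves_under[a]) for a in delta}
    pseudo = {}
    for z, anc in ancestors.items():
        discounted = max(counts.get(z, 0) - sum(delta[a] for a in anc), 0.0)
        pseudo[z] = discounted + sum(b[a] for a in anc)
    return pseudo  # normalizing these yields P(Z | S, n) when Q is uniform

# Reproducing the Figure 1 example: 12 leaves, root discount 0.24, two
# depth-1 nodes with discount 0.36, four depth-2 nodes with discount 0.30.
leaves_under = {"r": list(range(1, 13)), "a": list(range(1, 7)),
                "b": list(range(7, 13)), "a1": [1, 2, 3], "a2": [4, 5, 6],
                "b1": [7, 8, 9], "b2": [10, 11, 12]}
delta = {"r": 0.24, "a": 0.36, "b": 0.36,
         "a1": 0.30, "a2": 0.30, "b1": 0.30, "b2": 0.30}
ancestors = {z: ["r", "a" if z <= 6 else "b",
                 {0: "a1", 1: "a2", 2: "b1", 3: "b2"}[(z - 1) // 3]]
             for z in range(1, 13)}
print(sbt_pseudocounts({1: 1, 4: 2, 5: 1, 12: 1}, ancestors, delta, leaves_under))
# -> topic 1: 0.46, topic 4: 1.56, topic 8: 0.14, topic 12: 0.34, as in Figure 1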
4 The SBT Document Model

We now present the SBT document model, a probabilistic latent variable model of text documents that utilizes SBT priors. We then provide a collapsed sampler for the model that exploits the tree structure to make inference more efficient.

Our document model follows the Latent Dirichlet Allocation (LDA) model (Blei et al., 2003), illustrated graphically in Figure 2 (left). In LDA, a corpus of documents is generated by sampling a topic distribution θd for each document d, and also a distribution over words φz for each topic. Then, in document d each token w is generated by first sampling a topic z from the multinomial P(Z|θd), and then sampling w from the multinomial P(W|Z, φz).

Figure 2: The Latent Dirichlet Allocation model (left) and our SBT Document Model (right), shown as plate diagrams.

The SBTDM is the same as LDA, with one significant difference. In LDA, the parameters θ and φ are sampled from two Dirichlet priors, with separate hyperparameters α and β. In SBTDM, the parameters are instead sampled from particular SBT priors. Specifically, the generative model is:

θ ∼ SBT(T, δθ, Qθ(z) = 1/T)
φ′ ∼ SBT(T, δφ, Qφ(z) = P*(z))
λ ∼ Dirichlet(α′)
Z | θ ∼ Discrete(θ)
W | z, φ′, λ ∼ Discrete(λ φ′·,z / P(z|φ′))

The variable φ′ represents the distribution of topics given words, P(Z|W). The SBTDM samples a distribution φ′w over topics for each word type w in the vocabulary (of size V). In SBTDM, the random variable φ′w has dimension L, rather than V as for φz in LDA. We also draw a prior word frequency distribution, λ = {λw}, for each word w.2 We then apply Bayes' Rule to obtain the conditional distributions P(W|Z, φ′) required for inference. The expression λ φ′·,z / P(z|φ′) denotes the normalized element-wise product of two vectors of length V: the prior distribution λ over words, and the vector of probabilities P(z|w) = φ′w,z over words w for the given topic z.

The SBT priors for φ′ and θ share the same tree structure T, which is fixed in advance. The SBTs have different discount factors, denoted by the hyperparameters δθ and δφ. Finally, the backoff distribution for θ is uniform, whereas φ′'s backoff distribution P* is defined below.

2 The word frequency distribution does not impact the inferred topics (because words are always observed), and in our experiments we simply use maximum likelihood estimates for λw (i.e., setting α′ to be negligibly small). Exploring other word frequency distributions is an item of future work.

4.1 Backoff distribution P*(z)

SBTDM requires choosing a backoff distribution P*(z) for φ′. As we now show, it is possible to select a natural backoff distribution P*(z) that enables scalable inference. Given a set of observations n, we will set P*(z) proportional to P(z|Sφ, n). This is a recursive definition, because P(z|Sφ, n) depends on P*(z). Thus, we define:

P^*(z) \equiv \frac{\sum_w \max(n^w_z - \Delta_S(z),\, 0)}{C - \sum_w B_w(S_\phi, z, n)}    (2)

where C > \sum_w B_w(S_\phi, z, n) is a hyperparameter, n^w_z is the number of observations of topic z for word w in n, and B_w indicates the function B(S_\phi, z, n) defined in Section 3.1 for the particular word w. That is, \sum_w B_w(S_\phi, z, n) is the total quantity of smoothing distributed to topic z across all words, before the backoff distribution P*(z) is applied.

The form of P*(z) has two key advantages. The first is that setting P*(z) proportional to the marginal topic probability allows SBTDM to back off toward marginal estimates, a successful technique in language modeling (Katz, 1987) (where it has been utilized for word probabilities, rather than topic probabilities).
Secondly, setting the backoff distribution in this way allows us to simplify inference, as described below.

4.2 Inference with Collapsed Sampling

Given a corpus of documents D, we infer the values of the hidden variables Z using the collapsed Gibbs sampler popular in Latent Dirichlet Allocation models (Griffiths and Steyvers, 2004). Each variable Zi is sampled given the settings of all other variables (denoted as n−i):

P(Z_i = z \mid n_{-i}, D) \propto P(z \mid n_{-i}, T, \delta_\theta) \cdot P(w_i \mid z, n_{-i}, T, \delta_\phi)    (3)

The first term on the right-hand side is given by Equation 1. The second can be rewritten as:

P(w_i \mid z, n_{-i}, T, \delta_\phi) = \frac{P(z, w_i \mid n_{-i}, T, \delta_\phi)}{P(z \mid n_{-i}, T, \delta_\phi)}    (4)

4.3 Efficient Inference Implementation

The primary computational cost when scaling to large topic spaces involves constructing the sampling distribution. Both LDA with collapsed sampling and SBTDM share an advantage in space complexity: the model parameters are specified by a sparse set of non-zero counts denoting how often tokens of each word or document are assigned to each topic. However, in general the sampling distribution for SBTDM has non-uniform probabilities for each of L different latent variable values. Thus, even if many parameters are zero, a naive approach that computes the complete sampling distribution will still take time linear in L.

However, in SBTs the sampling distribution can be constructed efficiently using a simple recursive algorithm that exploits the structure of the tree. The result is an inference algorithm that often requires far less than linear time in L, as we verify in our experiments.

First, we note that P(z, wi | n−i, T, δφ) is proportional to the sum of two quantities: the discounted count max(nz − ∆S, 0) and the smoothing probability mass B(S, z, n)Q(z). By choosing Q(z) = P*(z), we are ensured that P*(z) normalizes this sum. Thus, the backoff distribution cancels through the normalization. This means we can normalize the SBT for φ′ in advance by scaling the non-zero counts by a factor of 1/P*(z), and then at inference time we need only multiply pointwise two multinomials with SBT priors and uniform backoff distributions.

The intersection of two multinomials drawn from SBT priors with uniform backoff distributions can be performed efficiently for sparse trees. The algorithm relies on two quantities defined for each node of each tree. The first, b(a, n), was defined in Section 3. It denotes the smoothing that the interior node a distributes (uniformly) to each of its descendant leaves. We denote b(a, n) as b(a) in this section for brevity. The second quantity, τ(a), expresses the total count mass of all leaf descendants of a, excluding the smoothing from ancestors of a.

With the quantities b(a) and τ(a) for all a, we can efficiently compute the sampling distribution of the product of two SBT-governed multinomials (which we refer to as an SBTI). The method is shown in Algorithm 1.

Algorithm 1 Compute the sampling distribution for a product of two multinomials with SBT priors with Q(z) = 1
  function INTERSECT(SBT Node ar, SBT Node al)
    if ar, al are leaves then
      τ(i) ← τ(ar)τ(al)
      return i
    end if
    i.r ← ar
    r(i) ← b(al) · τ(ar)
    i.l ← al ; b(i.l) ← 0
    l(i) ← b(ar) · τ(al) − b(ar)b(al)desc(ar)
    τ(i) += r(i) + l(i)
    for all children c non-zero for ar and al do
      ic ← INTERSECT(ar.c, al.c)
      i.children += ic
      τ(i) += τ(ic)
    end for
    return i
  end function

It takes two SBT nodes as arguments; these are corresponding nodes from two SBT priors that share the same tree structure T.
It returns an SBTI, a data structure representing the sampling distribution. The efficiency of Algorithm 1 is reflected in the fact that the algorithm only recurses for child nodes c with non-zero τ(c) for both of the SBT node arguments. Because such cases will be rare for sparse trees, often Algorithm 1 only needs to traverse a small portion of the original SBTs in order to compute the sampling distribution exactly. Our experiments illustrate the efficiency of this algorithm in practice.

Finally, we can efficiently sample from either an SBTI or a single SBT-governed multinomial. The sampling methods are straightforward recursive methods, supplied in Algorithms 2 and 3.

Algorithm 2 Sample(SBT Node a)
  procedure SAMPLE(SBT Node a)
    if a is a leaf then
      return a
    end if
    Sample from {b(a)desc(a), τ(a) − b(a)desc(a)}
    if the back-off mass b(a)desc(a) is selected then
      return Uniform[a's descendants]
    else
      Sample a's child c ∼ τ(c)
      return SAMPLE(c)
    end if
  end procedure

Algorithm 3 Sampling from an SBTI
  function SAMPLE(SBTI Node i)
    if i is a leaf then
      return i
    end if
    Sample from {r(i), l(i), τ(i) − r(i) − l(i)}
    if the right distribution r(i) is selected then
      return SAMPLE(i.r)
    else if the left distribution l(i) is selected then
      return SAMPLE(i.l)
    else
      Sample i's child c ∼ τ(c)
      return SAMPLE(c)
    end if
  end function

4.4 Expansion

Much of the computational expense encountered in inference with SBTDM occurs shortly after initialization. After a slow first several sampling passes, the conditional distributions over topics for each word and document become concentrated on a sparse set of paths through the SBT. From that point forward, sampling is faster and requires much less memory.

We utilize the hierarchical organization of the topic space in SBTs to side-step this computational complexity by adding new leaves to the SBTs of a trained SBTDM. This is a "coarse-to-fine" (Petrov and Charniak, 2011) training approach that we refer to as expansion. Using expansion, the initial sampling passes of the larger model can be much more time and space efficient, because they leverage the already-sparse structure of the smaller trained SBTDM.

Our expansion method takes as input an inferred sampling distribution n for a given tree T. The algorithm adds k new branches to each leaf of T to obtain a larger tree T′. We then transform the sampling state by replacing each ni ∈ n with one of its children in T′. For example, in Figure 1, expanding with k = 3 would result in a new tree containing 36 topics, and the observations of topic 4 in T would each be re-assigned randomly to one of the topics {10, 11, 12} in T′.

5 Experiments

We now evaluate the efficiency and accuracy of SBTDM. We evaluate SBTs on two data sets, the RCV1 Reuters corpus of newswire text (Lewis et al., 2004), and a distinct data set of Wikipedia links, WPL. We consider two disjoint subsets of RCV1, a small training set (RCV1s) and a larger
SBTDM-tall has lower branching factors of either 2 or 3 (so in our evaluation its depth ranges from 3 to 15). As in SBTDM-wide, in SBTDM-tall the lower branching factors occur toward the root of the tree. We vary the number of topics by considering balanced subtrees of each model. For nHDP, we use the same tree structures as in SBT-wide. In preliminary experiments, using the tall structure in nHDP yielded similar accuracy but was somewhat slower. Similar to our LDA implementation, SBTDM optimizes hyperparameter settings as sampling proceeds. We use local beam search to choose new hyperparameters that maximize leave-oneout likelihood for the distributions P(Z|d) and P(Z|w) on the training data, evaluating the parameters against the current state of the sampler. We trained all models by performing 100 sampling passes through the full training corpus (i.e., approximately 10 billion samples for RCV1, and 8 billion samples for WPL). We evaluate performance on held-out test sets of 998 documents for RCV1 (122,646 tokens), and 200 documents for WPL (84,610 tokens). We use the left-to-right algorithm (Wallach et al., 2009) over a randomized word order with 20 particles to compute perplexity. We re-optimize the LDA hyperparameters at regular intervals during sampling. 5.1 Small Corpus Experiments We begin with experiments over a small corpus to highlight the efficiency advantages of SBTDM. Data Set Tokens Vocabulary Documents RCV1s 2,669,093 46,130 22,149 RCV1 101,184,494 283,911 781,262 WPL 82,154,551 1,141,670 199,000 Table 1: Statistics of the three training corpora. As argued above, existing hierarchical models require inference that becomes expensive as the topic space increases in size. We illustrate this by comparing our model with PAM and nHDP. We also compare against a fast “flat” LDA implementation, SparseLDA, from the MALLET software package (McCallum, 2002). For SBTDM we utilize a parallel inference approach, sampling all variables using a fixed estimate of the counts n, and then updating the counts after each full sampling pass (as in (Wang et al., 2009)). The SparseLDA and nHDP implementations are also parallel. All parallel methods use 15 threads. The PAM implementation provided in MALLET is single-threaded. Our efficiency measurements are shown in Figure 3. We plot the mean wall-clock time to perform 100 sampling passes over the RCV1s corpus, starting from randomly initialized models (i.e. without expansion in SBTDM). For the largest plotted topic sizes for PAM and nHDP, we estimate total runtime using a small number of iterations. The results show that SBTDM’s time to sample increases well below linear in the number of topics. Both SBTDM methods have runtimes that increase at a rate substantially below that of the square root of the number of topics (plotted as a blue line in the figure for reference). For the largest numbers of topics in the plot, when we increase the number of topics by a factor of 12, the time to sample increases by less than a factor of 1.7 for both SBT configurations. We also evaluate the accuracy of the models on the small corpus. We do not compare against PAM, as the MALLET implementation lacks a method for computing perplexity for a PAM model. The results are shown in Table 3. The SBT models tend to achieve lower perplexity than LDA, and SBTDM-tall performs slightly better than SBTDM-wide for most topic sizes. The best model, SBT-wide with 8,748 topics, achieves perplexity 14% lower than the best LDA model and 2% lower than the best SBTDM-tall model. 
The LDA model overfits for the largest topic configuration, whereas at that size both SBT models remain at least as accurate as any of the LDA models in Table 3. We also evaluated using the topic coherence measure from (Mimno et al., 2011), which reflects how well the learned topics reflect word cooccurrence statistics in the training data. Follow780 Figure 3: Time (in seconds) to perform a sampling pass over the RCV1s corpus as number of topics varies, plotted on a log-log scale. The SBT models scale sub-linearly in the number of topics. ing recent experiments with the measure (Stevens et al., 2012), we use ϵ = 10−12 pseudo-cooccurrences of each word pair and we evaluate the average coherence using the top 10 words for each topic. Table 2 shows the results. PAM, LDA, and nHDP have better coherence at smaller topic sizes, but SBT maintains higher coherence as the number of topics increases. Topics LDA PAM nHDP SBTDM SBTDM -wide -tall 18 -420.8 -421.2 -422.9 -444.3 -440.2 108 -434.8 -430.9 -554.3 -445.4 -445.8 972 -451.2 -548.1 -443.3 -443.8 8748 -615.3 -444.3 -444.1 Table 2: Average topic coherence on the small RCV1s corpus. 5.1.1 Evaluating Expansion The results discussed above do not utilize expansion in SBTDM. To evaluate expansion, we performed separate experiments in which we expanded a 972-topic model trained on RCV1s to initialize a 8,748-topic model. Compared to random initialization, expansion improved efficiency and accuracy. Inference in the expanded model executed approximately 30% faster and used 70% less memory, and the final 8,748-topic models had approximately 10% lower perplexity. 5.2 Large Corpus Results Our large corpus experiments are reported in Table 4. Here, we compare the test set perplexity of a single model for each topic size and model type. We focus on SBTDM-tall for the large corpora. We utilize expansion (see Section 4.4) for SBTDM-tall models with more than a thousand topics on each data set. The results show that on both data sets, SBTDM-tall utilizes larger numbers of topics more effectively. On RCV1, LDA improves only marginally between 972 and 8,748 topics, whereas SBTDM-tall improves dramatically. For 8,748 topics, SBTDM-tall achieves a perplexity score 17% lower than LDA model on RCV1, and 29% lower on WPL. SBT improves even further in larger topic configurations. Training and testing LDA with our implementation using over a hundred thousand topics was not tractable on our data sets due to space complexity (the MALLET implementation exceeded our maximum 250G of heap space). As discussed above, expansion enables SBTDM to dramatically reduce space complexity for large topic spaces. The results highlight the accuracy improvements found from utilizing larger numbers of topics than are typically used in practice. For example, an SBTDM with 104,976 topics achieves perplexity 28-52% lower when compared with a standard LDA configuration using only 1,000 topics. RCV1 WPL # Topics LDA SBTDM-tall LDA SBTDM-tall 108 1,121 1,148 7,049 7,750 972 820 841 2,598 2,095 8,748 772 637 1,730 1,236 104,976 593 1,242 1,259,712 626 Table 4: Model accuracy on large corpora (corpus perplexity measure). The SBT model utilizes larger numbers of topics more effectively. 5.3 Exploring the Learned Topics Lastly, we qualitatively examine whether the SBTDM’s learned topics reflect meaningful hierarchical relationships. From an SBTDM of 104,976 topics trained on the Wikipedia links data set, we examined the first 108 leaves (these are contained in a single subtree of depth 5). 
760 unique terms (i.e. Wikipedia pages) had positive counts for the topics, and 500 of these terms were related to radio stations. The leaves appear to encode fine-grained subcategorizations of the terms. In Figure 4, we provide examples from one subtree of six topics (topics 13-18). For each topic t, we list the top three 781 Number of Topics Model 18 108 972 8,748 104,976 LDA 1420 (16.3) 1016 (9.8) 844 (1.8) 845 (3.3) 1058 (4.1) nHDP 1433 (19.6) 1446 (53.3) 1583 (157.7) SBTDM-wide 1510 (31.5) 1091 (31.8) 797 (3.5) 723 (18.2) 844 (60.1) SBTDM-tall 1480 (13.5) 1051 (9.1) 787 (10.5) 736 (3.2) 776 (14.1) Table 3: Small training corpus (RCV1s) performance. Shown is perplexity averaged over three runs for each method and number of topics, with standard deviation in parens. Both SBTDM models achieve lower perplexity than LDA and nHDP for the larger numbers of topics. Radio Stations T16 WNFZ WIMZ-FM WDXI TN stations T17 WSCW WQMA KPGM WV, MS, OK stations T18 WQSE WHMT WIGH TN stations T13 WVCB WWIL_(AM) WCRU NC Christian AM stations T14 WOWZ WHEE WYBT VA and FL stations T15 WRJD WTIK WYMY NC Spanish AM stations … … … … … … … … … … … … … … … … … … … Figure 4: An example of topics from a 104,976topic SBTDM defined over Wikipedia pages. terms w (ranked by symmetric conditional probability, P(w|t)P(t|w)), and a specific categorization that applies to the three terms. Interestingly, as shown in the figure, the top terms for the six topics we examined were all four-character “call letters” for particular radio stations. Stations with similar content or in nearby locations tend to cluster together in the tree. For example, the two topics focused on radio stations in Tennessee (TN) share the same parent, as do the topics focused on North Carolina (NC) AM stations. More generally, all six topics focus on radio stations in the southern US. Figure 5 shows a different example, from a model trained on the RCV1 corpus. In this example, we first select only those terms that occur at least 2,000 times in the corpus and have a statistical association with their topic that exceeds a threshold, and we again rank terms by symmetric conditional probability. The 27-topic subtree detailed in the figure appears to focus on terms from major storylines in United States politics in early 1997, including El Ni˜no, Lebanon, White House Press Secretary Mike McCarry, and the Senate confirmation hearings of CIA Director nominee Tony Lake. … T2166 El T2171 Lebanese Beirut Lebanon pound T2173 Lebanese T2160 El drought T2161 Western resource T2163 Western resource … … … … … … … … … … T2178 House White McCurry T2181 Lake Herman T2183 nomination Senate T2184 White Clinton McCurry House T2185 Bill T2186 CIA intelligence Lake T2168 El … … … … … … … … … Figure 5: An example of topics from an 8,748topic SBTDM defined over the RCV1 corpus. 6 Conclusion and Future Work We introduced the Sparse Backoff Tree (SBT), a prior for latent topic distributions that organizes the latent topics as leaves in a tree. We presented and experimentally analyzed a document model based on the SBT, called an SBTDM. The SBTDM was shown to utilize large topic spaces more effectively than previous techniques. There are several directions of future work. One limitation of the current work is that the SBT is defined only implicitly. We plan to investigate explicit representations of the SBT prior or related variants. 
Other directions include developing methods to learn the SBT structure from data, as well as applying the SBT prior to other tasks, such as sequential language modeling. Acknowledgments This research was supported in part by NSF grants IIS-1065397 and IIS-1351029, DARPA contract D11AP00268, and the Allen Institute for Artificial Intelligence. We thank the anonymous reviews for their helpful comments. 782 References [Ahmed et al.2013] Amr Ahmed, Liangjie Hong, and Alexander Smola. 2013. Nested chinese restaurant franchise process: Applications to user tracking and document modeling. In Proceedings of the 30th International Conference on Machine Learning (ICML-13), pages 1426–1434. [Blei et al.2003] David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. the Journal of machine Learning research, 3:993– 1022. [Blei et al.2010] David M Blei, Thomas L Griffiths, and Michael I Jordan. 2010. The nested chinese restaurant process and bayesian nonparametric inference of topic hierarchies. Journal of the ACM (JACM), 57(2):7. [Griffiths and Steyvers2004] Thomas L Griffiths and Mark Steyvers. 2004. Finding scientific topics. Proceedings of the National academy of Sciences of the United States of America, 101(Suppl 1):5228– 5235. [Ho et al.2010] Qirong Ho, Ankur P Parikh, Le Song, and Eric P Xing. 2010. Infinite hierarchical mmsb model for nested communities/groups in social networks. arXiv preprint arXiv:1010.1868. [Hofmann1999] Thomas Hofmann. 1999. Probabilistic latent semantic indexing. In Proceedings of the 22nd annual international ACM SIGIR conference on Research and development in information retrieval, pages 50–57. ACM. [Hu and Boyd-Graber2012] Yuening Hu and Jordan Boyd-Graber. 2012. Efficient tree-based topic modeling. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2, pages 275–279. Association for Computational Linguistics. [Katz1987] Slava Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. Acoustics, Speech and Signal Processing, IEEE Transactions on, 35(3):400–401. [Kim et al.2013] Suin Kim, Jianwen Zhang, Zheng Chen, Alice Oh, and Shixia Liu. 2013. A hierarchical aspect-sentiment model for online reviews. In Proceedings of AAAI. [Lewis et al.2004] David D Lewis, Yiming Yang, Tony G Rose, and Fan Li. 2004. Rcv1: A new benchmark collection for text categorization research. The Journal of Machine Learning Research, 5:361–397. [Li and McCallum2006] Wei Li and Andrew McCallum. 2006. Pachinko allocation: Dag-structured mixture models of topic correlations. In Proceedings of the 23rd International Conference on Machine Learning, ICML ’06, pages 577–584, New York, NY, USA. ACM. [Li et al.2014] Aaron Q Li, Amr Ahmed, Sujith Ravi, and Alexander J Smola. 2014. Reducing the sampling complexity of topic models. In Proceedings of the 20th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 891– 900. ACM. [McCallum2002] Andrew Kachites McCallum. 2002. Mallet: A machine learning for language toolkit. http://mallet.cs.umass.edu. [Mimno et al.2007] David Mimno, Wei Li, and Andrew McCallum. 2007. Mixtures of hierarchical topics with pachinko allocation. In Proceedings of the 24th international conference on Machine learning, pages 633–640. ACM. [Mimno et al.2011] David Mimno, Hanna M Wallach, Edmund Talley, Miriam Leenders, and Andrew McCallum. 2011. Optimizing semantic coherence in topic models. 
In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 262–272. Association for Computational Linguistics. [Mimno et al.2012] David Mimno, Matt Hoffman, and David Blei. 2012. Sparse stochastic inference for latent dirichlet allocation. arXiv preprint arXiv:1206.6425. [Paisley et al.2015] J. Paisley, C. Wang, D.M. Blei, and M.I. Jordan. 2015. Nested hierarchical dirichlet processes. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 37(2):256–270, Feb. [Petinot et al.2011] Yves Petinot, Kathleen McKeown, and Kapil Thadani. 2011. A hierarchical model of web summaries. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 670–675. Association for Computational Linguistics. [Petrov and Charniak2011] Slav Petrov and Eugene Charniak. 2011. Coarse-to-fine natural language processing. Springer Science & Business Media. [Porteous et al.2008] Ian Porteous, Arthur Asuncion, David Newman, Padhraic Smyth, Alexander Ihler, and Max Welling. 2008. Fast collapsed gibbs sampling for latent dirichlet allocation. In In Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 569–577. [Stevens et al.2012] Keith Stevens, Philip Kegelmeyer, David Andrzejewski, and David Buttler. 2012. Exploring topic coherence over many models and many topics. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 952–961. Association for Computational Linguistics. 783 [Wallach et al.2009] Hanna M Wallach, Iain Murray, Ruslan Salakhutdinov, and David Mimno. 2009. Evaluation methods for topic models. In Proceedings of the 26th Annual International Conference on Machine Learning, pages 1105–1112. ACM. [Wang et al.2009] Yi Wang, Hongjie Bai, Matt Stanton, Wen-Yen Chen, and Edward Y Chang. 2009. Plda: Parallel latent dirichlet allocation for large-scale applications. In Algorithmic Aspects in Information and Management, pages 301–314. Springer. [Wang et al.2013] Chi Wang, Marina Danilevsky, Nihit Desai, Yinan Zhang, Phuong Nguyen, Thrivikrama Taula, and Jiawei Han. 2013. A phrase mining framework for recursive construction of a topical hierarchy. In Proceedings of the 19th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’13, pages 437–445, New York, NY, USA. ACM. [Yao et al.2009] Limin Yao, David Mimno, and Andrew McCallum. 2009. Efficient methods for topic model inference on streaming document collections. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 937–946. ACM. 784
2015
75
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 785–794, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Trans-dimensional Random Fields for Language Modeling Bin Wang1, Zhijian Ou1, Zhiqiang Tan2 1Department of Electronic Engineering, Tsinghua University, Beijing 100084, China 2Department of Statistics, Rutgers University, Piscataway, NJ 08854, USA [email protected], [email protected], [email protected] Abstract Language modeling (LM) involves determining the joint probability of words in a sentence. The conditional approach is dominant, representing the joint probability in terms of conditionals. Examples include n-gram LMs and neural network LMs. An alternative approach, called the random field (RF) approach, is used in whole-sentence maximum entropy (WSME) LMs. Although the RF approach has potential benefits, the empirical results of previous WSME models are not satisfactory. In this paper, we revisit the RF approach for language modeling, with a number of innovations. We propose a trans-dimensional RF (TDRF) model and develop a training algorithm using joint stochastic approximation and trans-dimensional mixture sampling. We perform speech recognition experiments on Wall Street Journal data, and find that our TDRF models lead to performances as good as the recurrent neural network LMs but are computationally more efficient in computing sentence probability. 1 Introduction Language modeling is crucial for a variety of computational linguistic applications, such as speech recognition, machine translation, handwriting recognition, information retrieval and so on. It involves determining the joint probability p(x) of a sentence x, which can be denoted as a pair x = (l, xl), where l is the length and xl = (x1, . . . , xl) is a sequence of l words. Currently, the dominant approach is conditional modeling, which decomposes the joint probability of xl into a product of conditional probabilities 1 1And the joint probability of x is modeled as p(x) = by using the chain rule, p(x1, . . . , xl) = lY i=1 p(xi|x1, . . . , xi−1). (1) To avoid degenerate representation of the conditionals, the history of xi, denoted as hi = (x1, · · · , xi−1), is reduced to equivalence classes through a mapping φ(hi) with the assumption p(xi|hi) ≈p(xi|φ(hi)). (2) Language modeling in this conditional approach consists of finding suitable mappings φ(hi) and effective methods to estimate p(xi|φ(hi)). A classic example is the traditional n-gram LMs with φ(hi) = (xi−n+1, . . . , xi−1). Various smoothing techniques are used for parameter estimation (Chen and Goodman, 1999). Recently, neural network LMs, which have begun to surpass the traditional n-gram LMs, also follow the conditional modeling approach, with φ(hi) determined by a neural network (NN), which can be either a feedforward NN (Schwenk, 2007) or a recurrent NN (Mikolov et al., 2011). Remarkably, an alternative approach is used in whole-sentence maximum entropy (WSME) language modeling (Rosenfeld et al., 2001). Specifically, a WSME model has the form: p(x; λ) = 1 Z exp{λT f(x)} (3) Here f(x) is a vector of features, which can be arbitrary computable functions of x, λ is the corresponding parameter vector, and Z is the global normalization constant. 
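To make the form of (3) concrete, the following minimal sketch scores a sentence with a hypothetical feature map (unigram and bigram indicator counts) and toy weights; it is only an illustration, not the implementation used in this work, and the global constant Z is deliberately left out, since summing over all sentences is exactly the difficulty addressed in the remainder of the paper.

from collections import Counter

def feature_vector(sentence):
    # Hypothetical feature map f(x): unigram and bigram counts.
    words = sentence.split()
    feats = Counter(words)
    feats.update(zip(words, words[1:]))
    return feats

def unnormalized_log_prob(sentence, weights):
    # lambda^T f(x); the normalization constant Z is omitted here.
    return sum(weights.get(f, 0.0) * count
               for f, count in feature_vector(sentence).items())

toy_weights = {"market": 0.5, ("the", "market"): 1.2}   # assumed toy parameters
print(unnormalized_log_prob("the market rose sharply", toy_weights))

Any computable sentence-level function could be added to feature_vector in the same way, which is the appeal of the whole-sentence approach.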
Although WSME models have the potential benefits of being able to naturally express sentence-level phenomena and integrate features from a variety of knowledge p(xl)p(⟨EOS⟩|xl), where ⟨EOS⟩is a special token placed at the end of every sentence. Thus the distribution of the sentence length is implicitly modeled. 785 sources, their performance results ever reported are not satisfactory (Rosenfeld et al., 2001; Amaya and Bened´ı, 2001; Ruokolainen et al., 2010). The WSME model defined in (3) is basically a Markov random field (MRF). A substantial challenge in fitting MRFs is that evaluating the gradient of the log likelihood requires high-dimensional integration and hence is difficult even for moderately sized models (Younes, 1989), let alone the language model (3). The sampling methods previously tried for approximating the gradient are the Gibbs sampling, the Independence MetropolisHasting sampling and the importance sampling (Rosenfeld et al., 2001). Simple applications of these methods are hardly able to work efficiently for the complex, high-dimensional distribution such as (3), and hence the WSME models are in fact poorly fitted to the data. This is one of the reasons for the unsatisfactory results of previous WSME models. In this paper, we propose a new language model, called the trans-dimensional random field (TDRF) model, by explicitly taking account of the empirical distributions of lengths. This formulation subsequently enables us to develop a powerful Markov chain Monte Carlo (MCMC) technique, called trans-dimensional mixture sampling and then propose an effective training algorithm in the framework of stochastic approximation (SA) (Benveniste et al., 1990; Chen, 2002). The SA algorithm involves jointly updating the model parameters and normalization constants, in conjunction with trans-dimensional MCMC sampling. Section 2 and 3 present the model definition and estimation respectively. Furthermore, we make several additional innovations, as detailed in Section 4, to enable successful training of TDRF models. First, the diagonal elements of hessian matrix are estimated during SA iterations to rescale the gradient, which significantly improves the convergence of the SA algorithm. Second, word classing is introduced to accelerate the sampling operation and also improve the smoothing behavior of the models through sharing statistical strength between similar words. Finally, multiple CPUs are used to parallelize the training of our RF models. In Section 5, speech recognition experiments are conducted to evaluate our TDRF LMs, compared with the traditional 4-gram LMs and the recurrent neural network LMs (RNNLMs) (Mikolov et al., 2011) which have emerged as a new stateof-art of language modeling. We explore the use of a variety of features based on word and class information in TDRF LMs. In terms of word error rates (WERs) for speech recognition, our TDRF LMs alone can outperform the KN-smoothing 4gram LM with 9.1% relative reduction, and perform comparably to the RNNLM with a slight 0.5% relative reduction. To our knowledge, this result represents the first strong empirical evidence supporting the power of using the whole-sentence language modeling approach. Our open-source TDRF toolkit is released publicly 2. 2 Model Definition Throughout, we denote 3 by xl = (x1, . . . , xl) a sentence (i.e., word sequence) of length l ranging from 1 to m. Each element of xl corresponds to a single word. For l = 1, . . . 
, m, we assume that sentences of length l are distributed from an exponential family model: pl(xl; λ) = 1 Zl(λ)eλT f(xl), (4) where f(xl) = (f1(xl), f2(xl), . . . , fd(xl))T is the feature vector and λ = (λ1, λ2, . . . , λd)T is the corresponding parameter vector, and Zl(λ) is the normalization constant: Zl(λ) = X xl eλT f(xl) (5) Moreover, we assume that length l is associated with probability πl for l = 1, . . . , m. Therefore, the pair (l, xl) is jointly distributed as p(l, xl; λ) = πl pl(xl; λ). (6) We provide several comments on the above model definition. First, by making explicit the role of lengths in model definition, it is clear that the model in (6) is a mixture of random fields on sentences of different lengths (namely on subspaces of different dimensions), and hence will be called a trans-dimensional random field (TDRF). Different from the WSME model (3), a crucial aspect of the TDRF model (6) is that the mixture weights πl can be set to the empirical length probabilities in the training data. The WSME 2http://oa.ee.tsinghua.edu.cn/ ˜ouzhijian/software.htm 3We add sup or subscript l, e.g. in xl, pl(), to make clear that the variables and distributions depend on length l. 786 model (3) is essentially also a mixture of RFs, but the mixture weights implied are proportional to the normalizing constants Zl(λ): p(l, xl; λ) = Zl(λ) Z(λ) 1 Zl(λ)eλT f(xl), (7) where Z(λ) = Pm l=1 Zl(λ). A motivation for proposing (6) is that it is very difficult to sample from (3), namely (7), as a mixture distribution with unknown weights which typically differ from each other by orders of magnitudes, e.g. 1040 or more in our experiments. Setting mixture weights to the known, empirical length probabilities enables us to develop a very effective learning algorithm, as introduced in Section 3. Basically, the empirical weights serve as a control device to improve sampling from multiple distributions (Liang et al., 2007; Tan, 2015) . Second, it can be shown that if we incorporate the length features 4 in the vector of features f(x) in (3), then the distribution p(x; λ) in (3) under the maximum entropy (ME) principle will take the form of (6) and the probabilities (π1, . . . , πm) in (6) implied by the parameters for the length features are exactly the empirical length probabilities. Third, a feature fi(xl), 1 ≤i ≤d, can be any computable function of the sentence xl, such as n-grams. In our current experiments, the features fi(xl) and their corresponding parameters λi are defined to be position-independent and lengthindependent. For example, fi(xl) = P k fi(xl, k), where fi(xl, k) is a binary function of xl evaluated at position k. As a result, the feature fi(xl) takes values in the non-negative integers. 3 Model Estimation We develop a stochastic approximation algorithm using Markov chain Monte Carlo to estimate the parameters λ and the normalization constants Z1(λ), ..., Zm(λ) (Benveniste et al., 1990; Chen, 2002). The core algorithms newly designed in this paper are the joint SA for simultaneously estimating parameters and normalizing constants (Section 3.2) and trans-dimensional mixture sampling (Section 3.3) which is used as Step I of the joint SA. The most relevant previous works that we borrowed from are (Gu and Zhu, 2001) on SA for fitting a single RF, (Tan, 2015) on sampling and 4The length feature corresponding to length l is a binary feature that takes one if the sentence x is of length l, and otherwise takes zero. 
estimating normalizing constants from multiple RFs of the same dimension, and (Green, 1995) on trans-dimensional MCMC. 3.1 Maximum likelihood estimation Suppose that the training dataset consists of nl sentences of length l for l = 1, . . . , m. First, the maximum likelihood estimate of the length probability πl is easily shown to be nl/n, where n = Pm l=1 nl. By abuse of notation, we set πl = nl/n hereafter. Next, the log-likelihood of λ given the empirical length probabilities is L(λ) = 1 n m X l=1 X xl∈Dl log pl(xl; λ), (8) where Dl is the collection of sentences of length l in the training set. By setting to 0 the derivative of (8) with respect to λ, we obtain that the maximum likelihood estimate of λ is determined by the following equation: ∂L(λ) ∂λ = ˜p[f] −pλ[f] = 0, (9) where ˜p[f] is the expectation of the feature vector f with respect to the empirical distribution: ˜p[f] = 1 n m X l=1 X xl∈Dl f(xl), (10) and pλ[f] is the expectation of f with respect to the joint distribution (6) with πl = nl/n: pλ[f] = m X l=1 nl n pλ,l[f], (11) and pλ,l[f] = P xl f(xl)pl(xl; λ). Eq.(9) has the form of equating empirical expectations ˜p[f] with theoretical expectations pλ[f], as similarly found in maximum likelihood estimation of single random field models. 3.2 Joint stochastic approximation Training random field models is challenging due to numerical intractability of the normalizing constants Zl(λ) and expectations pλ,l[f]. We propose a novel SA algorithm for estimating the parameters λ by (9) and, simultaneously, estimating the log ratios of normalization constants: ζ∗ l (λ) = log Zl(λ) Z1(λ), l = 1, . . . , m (12) 787 Algorithm 1 Joint stochastic approximation Input: training set 1: set initial values λ(0) = (0, . . . , 0)T and ζ(0) = ζ∗(λ(0)) −ζ∗ 1(λ(0)) 2: for t = 1, 2, . . . , tmax do 3: set B(t) = ∅ 4: set (L(t,0), X(t,0)) = (L(t−1,K), X(t−1,K)) Step I: MCMC sampling 5: for k = 1 →K do 6: sampling (See Algorithm 3) (L(t,k), X(t,k)) = SAMPLE(L(t,k−1), X(t,k−1)) 7: set B(t) = B(t) ∪{(L(t,k), X(t,k))} 8: end for Step II: SA updating 9: Compute λ(t) based on (14) 10: Compute ζ(t) based on (15) and (16) 11: end for where Z1(λ) is chosen as the reference value and can be calculated exactly. The algorithm can be obtained by combining the standard SA algorithm for training single random fields (Gu and Zhu, 2001) and a trans-dimensional extension of the self-adjusted mixture sampling algorithm (Tan, 2015). Specifically, consider the following joint distribution of the pair (l, xl): p(l, xl; λ, ζ) ∝πl eζl eλT f(xl), (13) where πl is set to nl/n for l = 1, . . . , m, but ζ = (ζ1, . . . , ζm)T with ζ1 = 0 are hypothesized values of the truth ζ∗(λ) = (ζ∗ 1(λ), . . . , ζ∗ m(λ))T with ζ∗ 1(λ) = 0. The distribution p(l, xl; λ, ζ) reduces to p(l, xl; λ) in (6) if ζ were identical to ζ∗(λ). In general, p(l, xl; λ, ζ) differs from p(l, xl; λ) in that the marginal probability of length l is not necessarily πl. The joint SA algorithm, whose pseudo-code is shown in Algorithm 1, consists of two steps at each time t as follows. Step I: MCMC sampling. Generate a sample set B(t) with p(l, xl; λ(t−1), ζ(t−1)) as the stationary distribution (see Section 3.3). Step II: SA updating. Compute λ(t) = λ(t−1) + γλ ( ˜p[f] − P (l,xl)∈B(t) f(xl) K ) (14) where γλ is a learning rate of λ; compute ζ(t−1 2 ) = ζ(t) + γζ δ1(B(t)) π1 , . . . 
, δm(B(t)) πm  (15) ζ(t) = ζ(t−1 2 ) −ζ (t−1 2 ) 1 (16) where γζ is a learning rate of ζ, and δl(B(t)) is the relative frequency of length l appearing in B(t): δl(B(t)) = P (j,xj)∈B(t) 1(j = l) K . (17) The rationale in (15) is to adjust ζ based on how the relative frequencies of lengths δl(B(t)) are compared with the desired length probabilities πl. Intuitively, if the relative frequency of some length l in the sample set B(t) is greater (or respectively smaller) than the desired length probability πl, then the hypothesized value ζ(t−1) l is an underestimate (or overestimate) of ζ∗ l (λ(t−1)) and hence should be increased (or decreased). Following Gu & Zhu (2001) and Tan (2015), we set the learning rates in two stages: γλ = ( t−βλ if t ≤t0 1 t−t0+t βλ 0 if t > t0 (18) γζ = ( (0.1t)−βζ if t ≤t0 1 0.1(t−t0)+(0.1t0)βζ if t > t0 (19) where 0.5 < βλ, βζ < 1. In the first stage (t ≤t0), a slow-decaying rate of t−β is used to introduce large adjustments. This forces the estimates λ(t) and ζ(t) to fall reasonably fast into the true values. In the second stage (t > t0), a fast-decaying rate of t−1 is used. The iteration number t is multiplied by 0.1 in (19), to make the the learning rate of ζ decay more slowly than λ. Commonly, t0 is selected to ensure there is no more significant adjustment observed in the first stage. 3.3 Trans-dimensional mixture sampling We describe a trans-dimensional mixture sampling algorithm to simulate from the joint distribution p(l, xl; λ, ζ), which is used with (λ, ζ) = (λ(t−1), ζ(t−1)) at time t for MCMC sampling in the joint SA algorithm. The name “mixture sampling” reflects the fact that p(l, xl; λ, ζ) represents a labeled mixture, because l is a label indicating that xl is associated with the distribution pl(xl; ζ). With fixed (λ, ζ), this sampling algorithm can be seen as formally equivalent to reversible jump MCMC (Green, 1995), which was originally proposed for Bayes model determination. The trans-dimensional mixture sampling algorithm consists of two steps at each time t: local jump between lengths and Markov move of sentences for a given length. In the following, we denote by L(t−1) and X(t−1) the length and sequence 788 before sampling, but use the short notation (λ, ζ) for (λ(t−1), ζ(t−1)). Step I: Local jump. The Metropolis-Hastings method is used in this step to sample the length. Assuming L(t−1) = k, first we draw a new length j ∼Γ(k, ·). The jump distribution Γ(k, l) is defined to be uniform at the neighborhood of k : Γ(k, l) =          1 3, if k ∈[2, m −1], l ∈[k −1, k + 1] 1 2, if k = 1, l ∈[1, 2] or k = m, l ∈[m −1, m] 0, otherwise (20) where m is the maximum length. Eq.(20) restricts the difference between j and k to be no more than one. If j = k, we retain the sequence and perform the next step directly, i.e. set L(t) = k and X(t) = X(t−1). If j = k + 1 or j = k −1, the two cases are processed differently. If j = k + 1, we first draw an element (i.e., word) Y from a proposal distribution: Y ∼ gk+1(y|X(t−1)). Then we set L(t) = j (= k + 1) and X(t) = {X(t−1), Y } with probability min  1, Γ(j, k) Γ(k, j) p(j, {X(t−1), Y }; λ, ζ) p(k, X(t−1); λ, ζ)gk+1(Y |X(t−1))  (21) where {X(t−1), Y } denotes a sequence of length k + 1 whose first k elements are X(t−1) and the last element is Y . 
If j = k −1, we set L(t) = j (= k −1) and X(t) = X(t−1) 1:j with probability min ( 1, Γ(j, k) Γ(k, j) p(j, X(t−1) 1:j ; λ, ζ)gk(X(t−1) k |X(t−1) 1:j ) p(k, X(t−1); λ, ζ) ) (22) where X(t−1) 1:j is the first j elements of X(t−1) and X(t−1) k is the kth element of X(t−1). In (21) and (22), gk+1(y|xk) can be flexibly specified as a proper density function in y. In our application, we find the following choice works reasonably well: gk+1(y|xk) = p(k + 1, {xk, y}; λ, ζ) P w p(k + 1, {xk, w}; λ, ζ). (23) Step II: Markov move. After the step of local jump, we obtain X(t) =      X(t−1) if L(t) = k {X(t−1), Y } if L(t) = k + 1 X(t−1) 1:k−1 if L(t) = k −1 (24) Then we perform Gibbs sampling on X(t), from the first element to the last element (Algorithm 2) Algorithm 2 Markov Move 1: for i = 1 →L(t) do 2: draw W ∼p(L(t), {X(t) 1:i−1, w, X(t) i+1:L(t)}; λ, ζ) 3: set X(t) i = W 4: end for 4 Algorithm Optimization and Acceleration The joint SA algorithm may still suffer from slow convergence, especially when λ is highdimensional. We introduce several techniques for improving the convergence of the algorithm and reducing computational cost. 4.1 Improving SA recursion We propose two techniques to effectively improve the convergence of SA recursion. The first technique is to incorporate Hessian information, similarly as in related works on stochastic approximation (Gu and Zhu, 2001) and stochastic gradient descent algorithms (Byrd et al., 2014). But we only use the diagonal elements of the Hessian matrix to re-scale the gradient, due to high-dimensionality of λ. Taking the second derivatives of L(λ) yields Hi = −d2L(λ) dλ2 i = p[f2 i ] − m X l=1 πl(pl[fi])2 (25) where Hi denotes the ith diagonal element of Hessian matrix. At time t, before updating the parameter λ (Step II in Section 3.2), we compute H (t−1 2 ) i = 1 K X (l,xl)∈B(t) fi(xl)2 − m X l=1 πl(¯pl[fi])2, (26) H(t) i = H(t−1) i + γH(H (t−1 2 ) i −H(t−1) i ), (27) where ¯pl[fi] = |B(t) l |−1 P (l,xl)∈B(t) l fi(xl), and B(t) l is the subset, of size |B(t) l |, containing all sentences of length l in B(t). The second technique is to introduce the “minibatch” on the training set. At each iteration, a subset D(t) of K sentences are randomly selected from the training set. Then the gradient is approximated with the overall empirical expectation ˜p[f] being replaced by the empirical expectation over the subset D(t). This technique is reminiscent of stochastic gradient descent using a random subsample of training data to achieve fast convergence 789 0 20 40 60 80 100 120 140 160 180 200 t/10 − log−likelihood without hessian with hessian (a) 0 500 1000 1500 2000 50 100 150 200 t/10 negative log−likelihood Hessian+mini−batch Hessian (b) Figure 1: Examples of convergence curves on training set after introducing hessian and training set mini-batching. of optimization algorithms (Bousquet and Bottou, 2008). By combining the two techniques, we revise the updating equation (14) of λ to λ(t) i = λ(t−1) i + γλ max(H(t) i , h) × (P (l,xl)∈D(t) fi(xl) K − P (l,xl)∈B(t) fi(xl) K ) (28) where 0 < h < 1 is a threshold to avoid H(t) i being too small or even zero. Moreover, a constant tc is added to the denominator of (18), to avoid too large adjustment of λ, i.e. γλ = ( 1 tc+tβλ if t ≤t0, 1 tc+t−t0+t βλ 0 if t > t0. (29) Fig.1(a) shows the result after introducing hessian estimation, and Fig.1(b) shows the effect of training set mini-batching. 
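A minimal sketch of this rescaled update is given below, assuming the per-feature statistics over B(t) and D(t) have already been collected into NumPy arrays; it illustrates Eqs. (26)-(29) only and is not the released toolkit.

import numpy as np

def learning_rate(t, t0, beta, tc):
    # Two-stage schedule of Eq. (29).
    return 1.0 / (tc + t ** beta) if t <= t0 else 1.0 / (tc + t - t0 + t0 ** beta)

def sa_update(lam, H, emp_feat_mean, sample_feat_mean, sample_feat_sq_mean,
              per_length_feat_mean, pi, t, t0=10000, beta=0.6, tc=3000, h=1e-4):
    # emp_feat_mean: empirical feature means over the mini-batch D^(t)
    # sample_feat_mean, sample_feat_sq_mean: means of f_i and f_i^2 over B^(t)
    # per_length_feat_mean: matrix of per-length means \bar{p}_l[f_i]; pi: length probabilities
    gamma = learning_rate(t, t0, beta, tc)
    H_half = sample_feat_sq_mean - (pi[:, None] * per_length_feat_mean ** 2).sum(axis=0)   # Eq. (26)
    H = H + gamma * (H_half - H)                                                           # Eq. (27), with gamma_H = gamma_lambda
    lam = lam + gamma / np.maximum(H, h) * (emp_feat_mean - sample_feat_mean)              # Eq. (28)
    return lam, H

Once the statistics are accumulated from the sampled set and the mini-batch, the update itself is elementwise over the d features and therefore cheap relative to the MCMC step.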
4.2 Sampling acceleration For MCMC sampling in Section 3.3, the Gibbs sampling operation of drawing X(t) i (Step 2 in Algorithms 2) involves calculating the probabilities of all the possible elements in position i. This is computationally costly, because the vocabulary size |V| is usually 10 thousands or more in practice. As a result, the Gibbs sampling operation presents a bottleneck limiting the efficiency of sampling algorithms. We propose a novel method of using class information to effectively reduce the computational cost of Gibbs sampling. Suppose that each word in vocabulary V is assigned to a single class. If the total class number is |C|, then there are, on average, |V|/|C| words in each class. With the class information, we can first draw the class of X(t) i , denoted by c(t) i , and then draw a word Algorithm 3 Class-based MCMC sampling 1: function SAMPLE((L(t−1), X(t−1))) 2: set k = L(t−1) 3: init (L(t), X(t)) = (k, X(t−1)) Step I: Local jump 4: generate j ∼Γ(k, ·) (Eq.(20)) 5: if j = k + 1 then 6: generate C ∼Qk+1(c) 7: generate Y ∼˘gk+1(y|X(t−1), C) (Eq.31) 8: set L(t) = j and X(t) = {X(t−1), Y } with probability (Eq.21) and (Eq.32) 9: end if 10: if j = k −1 then 11: set L(t) = j and X(t) = X(t−1) 1:k−1 with probability Eq.(22) and (Eq.32) 12: end if Step II: Markov move 13: for i = 1 →L(t) do 14: draw C ∼Qi(c) 15: set c(t) i = C with probability (Eq.30) 16: draw W ∈Vc(t) i 17: set X(t) i = W 18: end for 19: return (L(t), X(t)) 20: end function belonging to class c(t) i . The computational cost is reduced from |V| to |C| + |V|/|C| on average. The idea of using class information to accelerate training has been proposed in various contexts of language modeling, such as maximum entropy models (Goodman, 2001b) and RNN LMs (Mikolov et al., 2011). However, the realization of this idea is different for training our models. The pseudo-code of the new sampling method is shown in Algorithm 3. Denote by Vc the subset of V containing all the words belonging to class c. In the Markov move step (Step 13 to 18 in Algorithm 3), at each position i, we first generate a class C from a proposal distribution Qi(c) and then accept C as the new c(t) i with probability min ( 1, Qi(c(t) i ) Qi(C) pi(C) pi(c(t) i ) ) (30) where pi(c) = X w∈Vc p(L(t), {X(t) 1:i−1, w, X(t) i+1:L(t)}; λ, ζ). The probabilities Qi(c) and pi(c) depend on {X(t) 1:i−1, X(t) i+1:L(t)}, but this is suppressed in the notation. Then we normalize the probabilities of words belonging to class c(t) i and draw a word as the new X(t) i from the class c(t) i . Similarly, in the local jump step with k = L(t−1), if the proposal j = k + 1 (Step 5 to 9 790 in Algorithm 3), we first generate C ∼Qk+1(c) and then generate Y from class C by ˘gk+1(y|xk, C) = p(k + 1, {xk, y}; λ, ζ) P w∈VC p(k + 1, {xk, w}; λ, ζ) (31) with xk = X(t−1). Then we set L(t) = j and X(t) = {X(t−1), Y } with probability as defined in (21), by setting gk+1(y|xk) = Qk+1(C)˘gk+1(y|xk, C). (32) If the proposal j = k −1, similarly we use acceptance probability (22) with (32). In our application, we construct Qi(c) dynamically as follows. Write xl for {X(t−1), Y } in Step 8 or for X(t) in Step 11 of Algorithm 3. First, we construct a reduced model pc l (xl), by including only the features that depend on xl i through its class and retaining the corresponding parameters in pl(xl; λ, ζ). Then we define the distribution Qi(c) = pc l ({xl 1:i−1, c, xl i+1:l}), which can be directly calculated without knowing the value of xl i. 
4.3 Parallelization of sampling The sampling operation can be easily parallelized in SA Algorithm 1. At each time t, both the parameters λ and log normalization constants ζ are fixed at λ(t−1) and ζ(t−1). Instead of simulating one Markov Chain, we simulate J Markov Chains on J CPU cores separately. As a result, to generate a sample set B(t) of size K, only K/J sampling steps need to be performed on each CPU core. By parallelization, the sampling operation is completed J times faster than before. 5 Experiments 5.1 PTB perplexity results In this section, we evaluate the performance of LMs by perplexity (PPL). We use the Wall Street Journal (WSJ) portion of Penn Treebank (PTB). Sections 0-20 are used as the training data (about 930K words), sections 21-22 as the development data (74K) and section 23-24 as the test data (82K). The vocabulary is limited to 10K words, with one special token ⟨UNK⟩denoting words not in the vocabulary. This setting is the same as that used in other studies (Mikolov et al., 2011). The baseline is a 4-gram LM with modified Kneser-Ney smoothing (Chen and Goodman, Type Features w (w−3w−2w−1w0)(w−2w−1w0)(w−1w0)(w0) c (c−3c−2c−1c0)(c−2c−1c0)(c−1c0)(c0) ws (w−3w0)(w−3w−2w0)(w−3w−1w0)(w−2w0) cs (c−3c0)(c−3c−2c0)(c−3c−1c0)(c−2c0) wsh (w−4w0) (w−5w0) csh (c−4c0) (c−5c0) cpw (c−3c−2c−1w0) (c−2c−1w0)(c−1w0) Table 1: Feature definition in TDRF LMs 1999), denoted by KN4. We use the RNNLM toolkit5 to train a RNNLM (Mikolov et al., 2011). The number of hidden units is 250 and other configurations are set by default6. Word classing has been shown to be useful in conditional ME models (Chen, 2009). For our TDRF models, we consider a variety of features as shown in Table 1, mainly based on word and class information. Each word is deterministically assigned to a single class, by running the automatic clustering algorithm proposed in (Martin et al., 1998) on the training data. In Table 1, wi, ci, i = 0, −1, . . . , −5 denote the word and its class at different position offset i, e.g. w0, c0 denotes the current word and its class. We first introduce the classic word/class n-gram features (denoted by “w”/“c”) and the word/class skipping n-gram features (denoted by “ws”/“cs”) (Goodman, 2001a). Second, to demonstrate that long-span features can be naturally integrated in TDRFs, we introduce higher-order features “wsh”/“csh”, by considering two words/classes separated with longer distance. Third, as an example of supporting heterogenous features that combine different information, the crossing features “cpw” (meaning class-predict-word) are introduced. Note that for all the feature types in Table 1, only the features observed in the training data are used. The joint SA (Algorithm 1) is used to train the TDRF models, with all the acceleration methods described in Section 4 applied. The minibatch size K = 300. The learning rates γλ and γζ are configured as (29) and (19) respectively with βλ = βζ = 0.6 and tc = 3000. For t0, it is first initialized to be 104. During iterations, we monitor the smoothed log-likelihood (moving average of 1000 iterations) on the PTB development data. 5http://rnnlm.org/ 6Minibatch size=10, learning rate=0.1, BPTT steps=5. 17 sweeps are performed before stopping, which takes about 25 hours. No word classing is used, since classing in RNNLMs reduces computation but at cost of accuracy. RNNLMs were experimented with varying numbers of hidden units (100500). The best result from using 250 hidden units is reported. 791 models PPL (± std. dev.) 
KN4 142.72 RNN 128.81 TDRF w+c 130.69±1.64 Table 2: The PPLs on the PTB test data. The class number is 200. We set t0 to the current iteration number once the rising percentage of the smoothed log-likelihoods within 100 iterations is below 20%, and then continue 5000 further iterations before stopping. The configuration of hessian estimation (Section 4.1) is γH = γλ and h = 10−4. L2 regularization with constant 10−5 is used to avoid over-fitting. 8 CPU cores are used to parallelize the algorithm, as described in Section 4.3, and the training of each TDRF model takes less than 20 hours. The perplexity results on the PTB test data are given in Table 2. As the normalization constants of TDRF models are estimated stochastically, we report the Monte Carlo mean and standard deviation from the last 1000 iterations for each PPL. The TDRF model using the basic “w+c” features performs close to the RNNLM in perplexity. To be compact, results with more features are presented in the following WSJ experiment. 5.2 WSJ speech recognition results In this section, we continue to use the LMs obtained above (using PTB training and development data), and evaluate their performance measured by WERs in speech recognition, by rescoring 1000-best lists from WSJ’92 test data (330 sentences). The oracle WER of the 1000-best lists is 3.4%, which are generated from using the Kaldi toolkit7 with a DNN-based acoustic model. TDRF LMs using a variety of features and different number of classes are tested. The results are shown in Table 3. Different types of features, like the skipping features, the higher-order features and the crossing features can all be easily supported in TDRF LMs, and the performance is improved to varying degrees. Particularly, the TDRF using the “w+c+ws+cs+cpw” features with class number 200 performs comparable to the RNNLM in both perplexity and WER. Numerically, the relative reduction is 9.1% compared with the KN4 LMs, and 0.5% compared with the RNN LM. 7http://kaldi.sourceforge.net/ model WER PPL (± std. dev.) #feat KN4 8.71 295.41 1.6M RNN 7.96 256.15 5.1M WSMEs (200c) w+c+ws+cs 8.87 ≈2.8 × 1012 5.2M w+c+ws+cs+cpw 8.82 ≈6.7 × 1012 6.4M TDRFs (100c) w+c 8.56 268.25±3.52 2.2M w+c+ws+cs 8.16 265.81±4.30 4.5M w+c+ws+cs+cpw 8.05 265.63±7.93 5.6M w+c+ws+cs+wsh+csh 8.03 276.90±5.00 5.2M TDRFs (200c) w+c 8.46 257.78±3.13 2.5M w+c+ws+cs 8.05 257.80±4.29 5.2M w+c+ws+cs+cpw 7.92 264.86±8.55 6.4M w+c+ws+cs+wsh+csh 7.94 266.42±7.48 5.9M TDRFs (500c) w+c 8.72 261.02±2.94 2.8M w+c+ws+cs 8.29 266.34±6.13 5.9M Table 3: The WERs and PPLs on the WSJ’92 test data. “#feat” denotes the feature number. Different TDRF models with class number 100/200/500 are reported (denoted by “100c”/“200c”/“500c”) 5.3 Comparison and discussion TDRF vs WSME. For comparison, Table 3 also presents the results from our implementation of the WSME model (3), using the same features as in Table 1. This WSME model is the same as in (Rosenfeld, 1997), but different from (Rosenfeld et al., 2001), which uses the traditional n-gram LM as the priori distribution p0. For the WSME model (3), we can still use a SA training algorithm, similar to that developed in Section 3.2, to estimate the parameters λ. But in this case, there is no need to introduce ζl, because the normalizing constants Zl(λ) are canceled out as seen from (7). Specifically, the learning rate γλ and the L2 regularization are configured the same as in TDRF training. A fixed number of iterations with t0 = 5000 is performed. 
The total iteration number is 10000, which is similar to the iteration number used in TDRF training. In order to calculate perplexity, we need to estimate the global normalizing constant Z(λ) = Pm l=1 Zl(λ) for the WSME model. Similarly as in (Tan, 2015), we apply the SA algorithm in Section 3.2 to estimate the log normalizing constants ζ, while fixing the parameters λ to be those already estimated from the WSME model and using uniform probabilities πl ≡m−1. The resulting PPLs of these WSME models are extremely poor. The average test log-likelihoods per sentence for these two WSME models are 792 −494 and −509 respectively. However, the WERs from using the trained WSME models in hypothesis re-ranking are not as poor as would be expected from their PPLs. This appears to indicate that the estimated WSME parameters are not so bad for relative ranking. Moreover, when the estimated λ and ζ are substituted into our TDRF model (6) with the empirical length probabilities πl, the “corrected” average test log-likelihoods per sentence for these two sets of parameters are improved to be −152 and −119 respectively. The average test log-likelihoods are both −96 for the two corresponding TDRF models in Table 3. This is some evidence for the model deficiency of the WSME distribution as defined in (3), and introducing the empirical length probabilities gives a more reasonable model assumption. TDRF vs conditional ME. After training, TDRF models are computationally more efficient in computing sentence probability, simply summing up weights for the activated features in the sentence. The conditional ME models (Khudanpur and Wu, 2000; Roark et al., 2004) suffer from the expensive computation of local normalization factors. This computational bottleneck hinders their use in practice (Goodman, 2001b; Rosenfeld et al., 2001). Partly for this reason, although building conditional ME models with sophisticated features as in Table 1 is theoretically possible, such work has not been pursued so far. TDRF vs RNN. The RNN models suffer from the expensive softmax computation in the output layer 8. Empirically in our experiments, the average time costs for re-ranking of the 1000-best list for a sentence are 0.16 sec vs 40 sec, based on TDRF and RNN respectively (no GPU used). 6 Related Work While there has been extensive research on conditional LMs, there has been little work on the whole-sentence LMs, mainly in (Rosenfeld et al., 2001; Amaya and Bened´ı, 2001; Ruokolainen et al., 2010). Although the whole-sentence approach has potential benefits, the empirical results of previous WSME models are not satisfactory, almost the same as traditional n-gram models. After incorporating lexical and syntactic information, a mere relative improvement of 1% and 0.4% 8This deficiency could be partly alleviated with some speed-up methods, e.g. using word clustering (Mikolov, 2012) or noise contrastive estimation (Mnih and Kavukcuoglu, 2013). respectively in perplexity and in WER is reported for the resulting WSEM (Rosenfeld et al., 2001). Subsequent studies of using WSEMs with grammatical features, as in (Amaya and Bened´ı, 2001) and (Ruokolainen et al., 2010), report perplexity improvement above 10% but no WER improvement when using WSEMs alone. Most RF modeling has been restricted to fixeddimensional spaces 9. Despite recent progress, fitting RFs of moderate or large dimensions remains to be challenging (Koller and Friedman, 2009; Mizrahi et al., 2013). 
In particular, the work of (Pietra et al., 1997) is inspiring to us, but the improved iterative scaling (IIS) method for parameter estimation and the Gibbs sampler are not suitable for even moderately sized models. Our TDRF model, together with the joint SA algorithm and trans-dimensional mixture sampling, are brand new and lead to encouraging results for language modeling. 7 Conclusion In summary, we have made the following contributions, which enable us to successfully train TDRF models and obtain encouraging performance improvement. • The new TDRF model and the joint SA training algorithm, which simultaneously updates the model parameters and normalizing constants while using trans-dimensional mixture sampling. • Several additional innovations including accelerating SA iterations by using Hessian information, introducing word classing to accelerate the sampling operation and improve the smoothing behavior of the models, and parallelization of sampling. In this work, we mainly explore the use of features based on word and class information. Future work with other knowledge sources and largerscale experiments is needed to fully exploit the advantage of TDRFs to integrate richer features. 8 Acknowledgments This work is supported by Toshiba Corporation, National Natural Science Foundation of China (NSFC) via grant 61473168, and Tsinghua Initiative. We thank the anonymous reviewers for helpful comments on this paper. 9Using local fixed-dimensional RFs in sequential models was once explored, e.g. temporal restricted Boltzmann machine (TRBM) (Sutskever and Hinton, 2007). 793 References Fredy Amaya and Jos´e Miguel Bened´ı. 2001. Improvement of a whole sentence maximum entropy language model using grammatical features. In Association for Computational Linguistics (ACL). Albert Benveniste, Michel M´etivier, and Pierre Priouret. 1990. Adaptive algorithms and stochastic approximations. New York: Springer. Olivier Bousquet and Leon Bottou. 2008. The tradeoffs of large scale learning. In NIPS, pages 161–168. Richard H Byrd, SL Hansen, Jorge Nocedal, and Yoram Singer. 2014. A stochastic quasi-newton method for large-scale optimization. arXiv preprint arXiv:1401.7020. Stanley F. Chen and Joshua Goodman. 1999. An empirical study of smoothing techniques for language modeling. Computer Speech & Language, 13:359– 394. Hanfu Chen. 2002. Stochastic approximation and its applications. Springer Science & Business Media. Stanley F. Chen. 2009. Shrinking exponential language models. In Proc. of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Joshua Goodman. 2001a. A bit of progress in language modeling. Computer Speech & Language, 15:403– 434. Joshua Goodman. 2001b. Classes for fast maximum entropy training. In Proc. of International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Peter J. Green. 1995. Reversible jump markov chain monte carlo computation and bayesian model determination. Biometrika, 82:711–732. Ming Gao Gu and Hong-Tu Zhu. 2001. Maximum likelihood estimation for spatial models by markov chain monte carlo stochastic approximation. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 63:339–355. Sanjeev Khudanpur and Jun Wu. 2000. Maximum entropy techniques for exploiting syntactic, semantic and collocational dependencies in language modeling. Computer Speech & Language, 14:355–372. Daphne Koller and Nir Friedman. 2009. 
Probabilistic graphical models: principles and techniques. MIT press. Faming Liang, Chuanhai Liu, and Raymond J Carroll. 2007. Stochastic approximation in monte carlo computation. Journal of the American Statistical Association, 102(477):305–320. Sven Martin, J¨org Liermann, and Hermann Ney. 1998. Algorithms for bigram and trigram word clustering. Speech Communication, 24:19–37. Tomas Mikolov, Stefan Kombrink, Lukas Burget, Jan H Cernocky, and Sanjeev Khudanpur. 2011. Extensions of recurrent neural network language model. In Proc. of International Conference on Acoustics, Speech and Signal Processing (ICASSP). Tom´aˇs Mikolov. 2012. Statistical language models based on neural networks. Ph.D. thesis, Brno University of Technology. Yariv Dror Mizrahi, Misha Denil, and Nando de Freitas. 2013. Linear and parallel learning of markov random fields. arXiv preprint arXiv:1308.6342. Andriy Mnih and Koray Kavukcuoglu. 2013. Learning word embeddings efficiently with noise-contrastive estimation. In Neural Information Processing Systems (NIPS). Stephen Della Pietra, Vincent Della Pietra, and John Lafferty. 1997. Inducing features of random fields. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19:380–393. Brian Roark, Murat Saraclar, Michael Collins, and Mark Johnson. 2004. Discriminative language modeling with conditional random fields and the perceptron algorithm. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics (ACL), page 47. Ronald Rosenfeld, Stanley F. Chen, and Xiaojin Zhu. 2001. Whole-sentence exponential language models: a vehicle for linguistic-statistical integration. Computer Speech & Language, 15:55–73. Ronald Rosenfeld. 1997. A whole sentence maximum entropy language model. In Proc. of Automatic Speech Recognition and Understanding (ASRU). Teemu Ruokolainen, Tanel Alum¨ae, and Marcus Dobrinkat. 2010. Using dependency grammar features in whole sentence maximum entropy language model for speech recognition. In Baltic HLT. Holger Schwenk. 2007. Continuous space language models. Computer Speech & Language, 21:492– 518. Ilya Sutskever and Geoffrey E Hinton. 2007. Learning multilevel distributed representations for highdimensional sequences. In International Conference on Artificial Intelligence and Statistics (AISTATS). Zhiqiang Tan. 2015. Optimally adjusted mixture sampling and locally weighted histogram. In Technical Report, Department of Statistics, Rutgers University. Laurent Younes. 1989. Parametric inference for imperfectly observed gibbsian fields. Probability theory and related fields, 82:625–645. 794
2015
76
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 795–804, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Gaussian LDA for Topic Models with Word Embeddings Rajarshi Das*, Manzil Zaheer*, Chris Dyer School of Computer Science Carnegie Mellon University Pittsburgh, PA, 15213, USA {rajarshd, manzilz, cdyer} @cs.cmu.edu Abstract Continuous space word embeddings learned from large, unstructured corpora have been shown to be effective at capturing semantic regularities in language. In this paper we replace LDA’s parameterization of “topics” as categorical distributions over opaque word types with multivariate Gaussian distributions on the embedding space. This encourages the model to group words that are a priori known to be semantically related into topics. To perform inference, we introduce a fast collapsed Gibbs sampling algorithm based on Cholesky decompositions of covariance matrices of the posterior predictive distributions. We further derive a scalable algorithm that draws samples from stale posterior predictive distributions and corrects them with a Metropolis–Hastings step. Using vectors learned from a domain-general corpus (English Wikipedia), we report results on two document collections (20-newsgroups and NIPS). Qualitatively, Gaussian LDA infers different (but still very sensible) topics relative to standard LDA. Quantitatively, our technique outperforms existing models at dealing with OOV words in held-out documents. 1 Introduction Latent Dirichlet Allocation (LDA) is a Bayesian technique that is widely used for inferring the topic structure in corpora of documents. It conceives of a document as a mixture of a small number of topics, and topics as a (relatively sparse) distribution over word types (Blei et al., 2003). These priors are remarkably effective at producing useful *Both student authors had equal contribution. results. However, our intuitions tell us that while documents may indeed be conceived of as a mixture of topics, we should further expect topics to be semantically coherent. Indeed, standard human evaluations of topic modeling performance are designed to elicit assessment of semantic coherence (Chang et al., 2009; Newman et al., 2009). However, this prior preference for semantic coherence is not encoded in the model, and any such observation of semantic coherence found in the inferred topic distributions is, in some sense, accidental. In this paper, we develop a variant of LDA that operates on continuous space embeddings of words— rather than word types—to impose a prior expectation for semantic coherence. Our approach replaces the opaque word types usually modeled in LDA with continuous space embeddings of these words, which are generated as draws from a multivariate Gaussian. How does this capture our preference for semantic coherence? Word embeddings have been shown to capture lexico-semantic regularities in language: words with similar syntactic and semantic properties are found to be close to each other in the embedding space (Agirre et al., 2009; Mikolov et al., 2013). Since Gaussian distributions capture a notion of centrality in space, and semantically related words are localized in space, our Gaussian LDA model encodes a prior preference for semantically coherent topics. Our model further has several advantages. Traditional LDA assumes a fixed vocabulary of word types. 
This modeling assumption drawback as it cannot handle out of vocabulary (OOV) words in “held out” documents. Zhai and Boyd-Graber (2013) proposed an approach to address this problem by drawing topics from a Dirichlet Process with a base distribution over all possible character strings (i.e., words). While this model can in principle handle unseen words, the only bias toward being included in a particular topic comes from the topic assignments in the rest 795 of the document. Our model can exploit the contiguity of semantically similar words in the embedding space and can assign high topic probability to a word which is similar to an existing topical word even if it has never been seen before. The main contributions of our paper are as follows: We propose a new technique for topic modeling by treating the document as a collection of word embeddings and topics itself as multivariate Gaussian distributions in the embedding space (§3). We explore several strategies for collapsed Gibbs sampling and derive scalable algorithms, achieving asymptotic speed-up over the na¨ıve implementation (§4). We qualitatively show that our topics make intuitive sense and quantitatively demonstrate that our model captures a better representation of a document in the topic space by outperforming other models in a classification task (§5). 2 Background Before going to the details of our model we provide some background on two topics relevant to our work: vector space word embeddings and LDA. 2.1 Vector Space Semantics According to the distributional hypothesis (Harris, 1954), words occurring in similar contexts tend to have similar meaning. This has given rise to data-driven learning of word vectors that capture lexical and semantic properties, which is now a technique of central importance in natural language processing. These word vectors can be used for identifying semantically related word pairs (Turney, 2006; Agirre et al., 2009) or as features in downstream text processing applications (Turian et al., 2010; Guo et al., 2014). Word vectors can either be constructed using low rank approximations of cooccurrence statistics (Deerwester et al., 1990) or using internal representations from neural network models of word sequences (Collobert and Weston, 2008). We use a recently popular and fast tool called word2vec1, to generate skip-gram word embeddings from unlabeled corpus. In this model, a word is used as an input to a log-linear classifier with continuous projection layer and words within a certain window before and after the words are predicted. 1https://code.google.com/p/word2vec/ 2.2 Latent Dirichlet Allocation (LDA) LDA (Blei et al., 2003) is a probabilistic topic model of corpora of documents which seeks to represent the underlying thematic structure of the document collection. They have emerged as a powerful new technique of finding useful structure in an unstructured collection as it learns distributions over words. The high probability words in each distribution gives us a way of understanding the contents of the corpus at a very high level. In LDA, each document of the corpus is assumed to have a distribution over K topics, where the discrete topic distributions are drawn from a symmetric dirichlet distribution. The generative process is as follows. 1. for k = 1 to K (a) Choose topic βk ∼Dir(η) 2. for each document d in corpus D (a) Choose a topic distribution θd ∼Dir(α) (b) for each word index n from 1 to Nd i. Choose a topic zn ∼ Categorical(θd) ii. 
Choose word wn ∼ Categorical(βzn) As it follows from the definition above, a topic is a discrete distribution over a fixed vocabulary of word types. This modeling assumption precludes new words to be added to topics. However modeling topics as a continuous distribution over word embeddings gives us a way to address this problem. In the next section we describe Gaussian LDA, a straightforward extension of LDA that replaces categorical distributions over word types with multivariate Gaussian distributions over the word embedding space. 3 Gaussian LDA As with multinomial LDA, we are interested in modeling a collection of documents. However, we assume that rather than consisting of sequences of word types, documents consist of sequences of word embeddings. We write v(w) ∈RM as the embedding of word of type w or vd,i when we are indexing a vector in a document d at position i. Since our observations are no longer discrete values but continuous vectors in an Mdimensional space, we characterize each topic k as a multivariate Gaussian distribution with mean µk and covariance Σk. The choice of a Gaussian parameterization is justified by both analytic convenience and observations that Euclidean distances 796 p(zd,i = k | z−(d,i), Vd, ζ, α) ∝(nk,d + αk) × tνk−M+1  vd,i µk, κk + 1 κk Σk  (1) Figure 1: Sampling equation for the collapsed Gibbs sampler; refer to text for a description of the notation. between embeddings correlate with semantic similarity (Collobert and Weston, 2008; Turney and Pantel, 2010; Hermann and Blunsom, 2014). We place conjugate priors on these values: a Gaussian centered at zero for the mean and an inverse Wishart distribution for the covariance. As before, each document is seen as a mixture of topics whose proportions are drawn from a symmetric Dirichlet prior. The generative process can thus be summarized as follows: 1. for k = 1 to K (a) Draw topic covariance Σk ∼ W−1(Ψ, ν) (b) Draw topic mean µk ∼N(µ, 1 κΣk) 2. for each document d in corpus D (a) Draw topic distribution θd ∼Dir(α) (b) for each word index n from 1 to Nd i. Draw a topic zn ∼Categorical(θd) ii. Draw vd,n ∼N(µzn, Σzn) This model has previously been proposed for obtaining indexing representations for audio retrieval (Hu et al., 2012). They use variational/EM method for posterior inference. Although we don’t do any experiment to compare the running time of both approaches, the per-iteration computational complexity is same for both inference methods. We propose a faster inference technique using Cholesky decomposition of covariance matrices which can be applied to both the Gibbs and variational/EM method. However we are not aware of any straightforward way of applying the aliasing trick proposed by (Li et al., 2014) on the variational/EM method which gave us huge improvement on running time (see Figure 2). Another work which combines embedding with topic models is by (Wan et al., 2012) where they jointly learn the parameters of a neural network and a topic model to capture the topic distribution of low dimensional representation of images. 4 Posterior Inference In our application, we observe documents consisting of word vectors and wish to infer the posterior distribution over the topic parameters, proportions, and the topic assignments of individual words. Since there is no analytic form of the posterior, approximations are required. 
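Before turning to inference, the generative story above can be made concrete with a short simulation sketch using assumed toy hyperparameters (an illustration of the model only, not the experimental setup); the inference problem discussed next is to invert this process when only the word vectors are observed.

import numpy as np
from scipy.stats import invwishart

def generate_corpus(K=3, M=10, D=5, doc_len=20, alpha=0.5, kappa=0.1, seed=0):
    rng = np.random.default_rng(seed)
    mu0 = np.zeros(M)                      # prior mean
    Psi = 3.0 * np.eye(M)                  # assumed prior scale matrix
    nu = M + 2                             # degrees of freedom (> M - 1)
    Sigma = [invwishart.rvs(df=nu, scale=Psi, random_state=rng) for _ in range(K)]
    mu = [rng.multivariate_normal(mu0, S / kappa) for S in Sigma]
    corpus = []
    for _ in range(D):
        theta = rng.dirichlet(np.full(K, alpha))      # topic proportions
        z = rng.choice(K, size=doc_len, p=theta)      # topic assignments
        vecs = np.stack([rng.multivariate_normal(mu[k], Sigma[k]) for k in z])
        corpus.append((z, vecs))
    return corpus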
Because of our choice of conjugate priors for topic parameters and proportions, these variables can be analytically integrated out, and we can derive a collapsed Gibbs sampler that resamples topic assignments to individual word vectors, similar to the collapsed sampling scheme proposed by Griffiths and Steyvers (2004). The conditional distribution we need for sampling is shown in Figure 1. Here, z−(d,i) represents the topic assignments of all word embeddings, excluding the one at the i-th position of document d; Vd is the sequence of vectors for document d; tν′(x | µ′, Σ′) is the multivariate t-distribution with ν′ degrees of freedom and parameters µ′ and Σ′. The tuple ζ = (µ, κ, Σ, ν) represents the parameters of the prior distribution. It should be noted that the first part of the equation, which expresses the probability of topic k in document d, is the same as that of LDA. This is because the portion of the model which generates a topic for each word (vector) from its document topic distribution is still the same. The second part of the equation, which expresses the probability of assignment of topic k to the word vector vd,i given the current topic assignments (a.k.a. the posterior predictive), is given by a multivariate t-distribution with parameters (µk, κk, Σk, νk). The parameters of the posterior predictive distribution are given as (Murphy, 2012):

\kappa_k = \kappa + N_k, \qquad \nu_k = \nu + N_k, \qquad \mu_k = \frac{\kappa \mu + N_k \bar{v}_k}{\kappa_k},
\Psi_k = \Psi + C_k + \frac{\kappa N_k}{\kappa_k} (\bar{v}_k - \mu)(\bar{v}_k - \mu)^\top, \qquad \Sigma_k = \frac{\Psi_k}{\nu_k - M + 1}   (2)

where v̄k and Ck are given by

\bar{v}_k = \frac{\sum_d \sum_{i: z_{d,i}=k} v_{d,i}}{N_k}, \qquad C_k = \sum_d \sum_{i: z_{d,i}=k} (v_{d,i} - \bar{v}_k)(v_{d,i} - \bar{v}_k)^\top.

Here v̄k is the sample mean and Ck is the scaled form of the sample covariance of the vectors with topic assignment k. Nk represents the count of words assigned to topic k across all documents. Intuitively, the parameters µk and Σk represent the posterior mean and covariance of the topic distribution, and κk, νk represent the strength of the prior for the mean and the covariance, respectively. Analysis of running time complexity As can be seen from (1), for the computation of the posterior predictive we need to evaluate the determinant and inverse of the posterior covariance matrix. Direct naïve computation of these terms requires O(M^3) operations. Moreover, during sampling, as words get assigned to different topics, the parameters (µk, κk, Ψk, νk) associated with a topic change, and hence we have to recompute the determinant and inverse matrix. Since these steps have to be repeated many times (in the worst case, as many times as the number of words times the number of topics in one Gibbs sweep), it is critical to make the process as efficient as possible. We speed up this process by employing a combination of modern computational techniques and mathematical (linear algebra) tricks, as described in the following subsections. 4.1 Faster sampling using Cholesky decomposition of covariance matrix Having another look at the posterior equation for Ψk, we can re-write the equation as:

\Psi_k = \Psi + C_k + \frac{\kappa N_k}{\kappa_k} (\bar{v}_k - \mu)(\bar{v}_k - \mu)^\top = \Psi + \sum_d \sum_{i: z_{d,i}=k} v_{d,i} v_{d,i}^\top - \kappa_k \mu_k \mu_k^\top + \kappa \mu \mu^\top.   (3)

During sampling, when we are computing the assignment probability of topic k to vd,i, we need to calculate the updated parameters of the topic. Using (3), it can be shown that Ψk can be updated from its current value, after updating κk, νk and µk, as follows:

\Psi_k \leftarrow \Psi_k + \frac{\kappa_k}{\kappa_k - 1} (\mu_k - v_{d,i}) (\mu_k - v_{d,i})^\top.   (4)

This equation has the form of a rank-1 update, hinting towards the use of the Cholesky decomposition. If we have the Cholesky decomposition of Ψk computed, then we have the tools to update Ψk cheaply.
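To make Equations (1)-(2) concrete, the sketch below computes the posterior predictive parameters of one topic from its sufficient statistics and evaluates the corresponding multivariate t log-density for a candidate word vector. It is an illustrative re-implementation (not the authors' code) and uses a dense solve rather than the Cholesky tricks of §4.1.

```python
import numpy as np
from scipy.special import gammaln

def posterior_params(vectors_k, mu0, kappa0, Psi0, nu0):
    """Posterior predictive parameters (Eq. 2) for one topic, given the
    N_k x M array of vectors currently assigned to it."""
    N_k, M = vectors_k.shape
    v_bar = vectors_k.mean(axis=0)
    C_k = (vectors_k - v_bar).T @ (vectors_k - v_bar)   # scaled sample covariance
    kappa_k = kappa0 + N_k
    nu_k = nu0 + N_k
    mu_k = (kappa0 * mu0 + N_k * v_bar) / kappa_k
    Psi_k = Psi0 + C_k + (kappa0 * N_k / kappa_k) * np.outer(v_bar - mu0, v_bar - mu0)
    Sigma_k = Psi_k / (nu_k - M + 1)
    return mu_k, kappa_k, Sigma_k, nu_k

def mvt_logpdf(x, mu, Sigma, df):
    """Log density of a multivariate Student-t distribution."""
    M = len(mu)
    diff = x - mu
    _, logdet = np.linalg.slogdet(Sigma)
    maha = diff @ np.linalg.solve(Sigma, diff)
    return (gammaln((df + M) / 2.0) - gammaln(df / 2.0)
            - 0.5 * (M * np.log(df * np.pi) + logdet)
            - 0.5 * (df + M) * np.log1p(maha / df))

def predictive_logprob(x, vectors_k, mu0, kappa0, Psi0, nu0):
    """Second factor of Eq. 1: t_{nu_k - M + 1}(x | mu_k, (kappa_k+1)/kappa_k * Sigma_k)."""
    mu_k, kappa_k, Sigma_k, nu_k = posterior_params(vectors_k, mu0, kappa0, Psi0, nu0)
    M = len(mu_k)
    return mvt_logpdf(x, mu_k, (kappa_k + 1.0) / kappa_k * Sigma_k, nu_k - M + 1)
```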
Since Ψk and Σk are off by only a scalar factor, we can equivalently talk about Σk. Equation (4) can also be understood in the following way. During sampling, when a word embedding vd,i gets a new assignment to a topic, say k, then the new value of the topic covariance can be computed from the current one using just a rank-1 update. (Similarly, the covariance of the old topic assignment of the word can be computed using a rank-1 downdate.) We next describe how to exploit the Cholesky decomposition representation to speed up computations. For the sake of completeness, any symmetric M × M real matrix Σk is said to be positive definite if ∀z ∈ RM : z⊤Σkz > 0. The Cholesky decomposition of such a symmetric positive definite matrix Σk is nothing but its decomposition into the product of some lower triangular matrix L and its transpose, i.e. Σk = LL⊤. Finding this factorization also takes cubic time. However, given the Cholesky decomposition of Σk, after a rank-1 update (or downdate), i.e. the operation Σk ← Σk + zz⊤, we can find the factorization of the new Σk in just quadratic time (Stewart, 1998). We will use this trick to speed up the computations. (For our experiments, we set the prior covariance to be 3*I, which is a positive definite matrix.) Basically, instead of computing the determinant and inverse again in cubic time, we will use such rank-1 updates (downdates) to find the new determinant and inverse in an efficient manner, as explained in detail below. To compute the density of the posterior predictive t-distribution, we need to compute the determinant |Σk| and a term of the form (v_{d,i} − µ_k)^⊤ Σ_k^{−1} (v_{d,i} − µ_k). The Cholesky decomposition of the covariance matrix can be used for efficient computation of these expressions, as shown below. Computation of the determinant: The determinant of Σk can be computed from its Cholesky decomposition L as:

\log |\Sigma_k| = 2 \sum_{i=1}^{M} \log L_{i,i}.

This takes time linear in the dimension M and is clearly a significant gain over the cubic time complexity. Computation of (v_{d,i} − µ_k)^⊤ Σ_k^{−1} (v_{d,i} − µ_k): Let b = (v_{d,i} − µ_k). Now b⊤Σ−1b can be written as

b^\top \Sigma^{-1} b = b^\top (L L^\top)^{-1} b = b^\top (L^{-1})^\top L^{-1} b = (L^{-1} b)^\top (L^{-1} b).

Now L−1b is the solution of the equation Lx = b. Also, since L is a lower triangular matrix, this equation can be solved easily using forward substitution. Lastly, we take the inner product x⊤x to get the value of (v_{d,i} − µ_k)^⊤ Σ^{−1} (v_{d,i} − µ_k). This step again takes quadratic time and is again a saving over the cubic time complexity. 4.2 Further reduction of sampling complexity using Alias Sampling Although the Cholesky trick helps us reduce the sampling complexity for an embedding to O(KM^2), it can still be impractical. In Gaussian LDA, the Gibbs sampling equation (1) can be split into two terms. The first term, nk,d × tνk−M+1(vd,i | µk, ((κk+1)/κk)Σk), denotes the document contribution, and the second term, αk × tνk−M+1(vd,i | µk, ((κk+1)/κk)Σk), denotes the language model contribution. Empirically, one can make two observations about these terms. First, nk,d is often a sparse vector, as a document most likely contains only a few of the topics. Secondly, the topic parameters (µk, Σk) capture global phenomena and change relatively slowly over the iterations. We can exploit these findings to avoid the naïve approach of drawing a sample from (1). In particular, we compute the document-specific sparse term exactly, and for the remaining language model term we borrow an idea from Li et al. (2014). We use a slightly stale distribution for the language model.
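Before turning to the Metropolis-Hastings correction described next, the following sketch illustrates the linear-algebra bookkeeping of §4.1: a rank-1 Cholesky update in quadratic time, the log-determinant from the factor's diagonal in linear time, and the quadratic form via forward substitution in quadratic time. This is illustrative code under the stated assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.linalg import solve_triangular

def chol_rank1_update(L, x):
    """Given lower-triangular L with A = L @ L.T, return the Cholesky factor
    of A + x x^T in O(M^2) time (classic rank-1 update; cf. Stewart, 1998)."""
    L, x = L.copy(), x.copy()
    M = len(x)
    for k in range(M):
        r = np.hypot(L[k, k], x[k])
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        if k + 1 < M:
            L[k + 1:, k] = (L[k + 1:, k] + s * x[k + 1:]) / c
            x[k + 1:] = c * x[k + 1:] - s * L[k + 1:, k]
    return L

def log_det_from_chol(L):
    """log|Sigma| = 2 * sum_i log L_ii  (linear time)."""
    return 2.0 * np.sum(np.log(np.diag(L)))

def mahalanobis_from_chol(L, b):
    """b^T Sigma^{-1} b = ||L^{-1} b||^2, via forward substitution (quadratic time)."""
    y = solve_triangular(L, b, lower=True)
    return y @ y

# Usage sketch: maintain one factor L_k per topic; when a word vector moves into
# (or out of) topic k, apply a scaled rank-1 update (or downdate) to L_k instead
# of refactorizing the covariance from scratch.
```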
Then the Metropolis-Hastings (MH) algorithm allows us to convert the stale sample into a fresh one, provided that we compute the ratios between successive states correctly. It is sufficient to run MH for a small number of steps because the stale distribution acting as the proposal is very similar to the target. This is because, as pointed out earlier, the language model term does not change too drastically whenever we resample a single word. The number of words is huge, hence the amount of change per word is concomitantly small. (If only one could convert stale bread into fresh bread, it would solve the world's food problem!) Using a stale distribution with MH steps is advantageous because sampling from it can be carried out in O(1) amortized time, thanks to the alias sampling technique (Vose, 1991). Moreover, the task of building the alias tables can be outsourced to other cores. With the combination of both the Cholesky and Alias tricks, the sampling complexity can thus be brought down to O(K_d M^2), where K_d represents the number of actually instantiated topics in the document and K_d ≪ K. In particular, we plot the sampling rate achieved naively, with the Cholesky (CH) trick, and with the Cholesky+Alias (A+CH) trick in Figure 2, demonstrating better likelihood in much less time. Also, after the initial few iterations, the time per iteration of the A+CH trick is 9.93 times less than CH and 53.1 times less than the naïve method. This is because we start with a random initialization of words to topics, but after a few iterations the nk,d vector starts to become sparse.

Figure 2: Plot comparing average log-likelihood vs. time (in sec) achieved after applying each trick (Naive, Cholesky, Alias+Cholesky) on the NIPS dataset. The shapes on each curve denote the end of each iteration.

5 Experiments In this section we evaluate our Word Vector Topic Model on various experimental tasks. Specifically, we wish to determine:
• Is our model able to find coherent and meaningful topics?
• Is our model able to infer the topic distribution of a held-out document even when the document contains words which were previously unseen?
We run our experiments on two datasets, 20NEWSGROUP and NIPS. (Our implementation is available at https://github.com/rajarshd/Gaussian_LDA. 20NEWSGROUP is a collection of newsgroup documents partitioned into 20 news groups; after pre-processing we had 18,768 documents, of which we randomly selected 2,000 as our test set; the dataset is publicly available at http://qwone.com/~jason/20Newsgroups/. NIPS is a collection of 1,740 papers from the proceedings of Neural Information Processing Systems, available at http://www.cs.nyu.edu/~roweis/data.html.) All the datasets were tokenized and lowercased with cdec (Dyer et al., 2010). 5.1 Topic Coherence Quantitative Analysis Typically, topic models are evaluated based on the likelihood of held-out documents. But in this case, it is not correct to compare perplexities with models which do topic modeling on words. Since our topics are continuous distributions, the probability of a word vector is given by its density w.r.t. the normal distribution of its topic assignment, instead of a probability mass from a discrete topic distribution. Moreover, Chang et al. (2009) showed that a higher likelihood of held-out documents does not necessarily correspond to human perception of topic coherence. Instead, to measure topic coherence we follow Newman et al. (2009) and compute the Pointwise Mutual Information (PMI) of topic words w.r.t. Wikipedia articles. We extract the document co-occurrence statistics of topic words from Wikipedia and compute the score of a topic by averaging the score of the top 15 words of the topic. A higher PMI score implies a more coherent topic, as it means the topic words usually co-occur in the same document.
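A minimal sketch of this coherence score, following the general recipe of Newman et al. (2009), is shown below; the smoothing constant and the exact counting scheme are our own simplifying assumptions, not necessarily those used in the paper.

```python
import math
from itertools import combinations

def topic_pmi(top_words, doc_freq, co_doc_freq, num_docs, eps=1.0):
    """Average PMI over pairs of a topic's top words.

    doc_freq[w]            -> number of reference documents containing w
    co_doc_freq[(w1, w2)]  -> number of documents containing both (keys sorted)
    num_docs               -> total number of reference documents
    eps                    -> smoothing for unseen pairs (assumption)
    """
    scores = []
    for w1, w2 in combinations(top_words, 2):
        p1 = doc_freq.get(w1, 0) / num_docs
        p2 = doc_freq.get(w2, 0) / num_docs
        p12 = (co_doc_freq.get(tuple(sorted((w1, w2))), 0) + eps) / num_docs
        if p1 > 0 and p2 > 0:
            scores.append(math.log(p12 / (p1 * p2)))
    return sum(scores) / len(scores) if scores else 0.0

# A topic is scored on its 15 highest-ranked words, with counts taken from
# Wikipedia document co-occurrence statistics.
```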
In the last line of Table 1, we present the PMI score for some of the topics for both Gaussian LDA and traditional multinomial LDA. It can be seen that Gaussian LDA is a clear winner, achieving a 275% higher score on average. However, we are using embeddings trained on the Wikipedia corpus itself, and the PMI measure is computed from co-occurrence in the Wikipedia corpus. As a result, our model is definitely biased towards producing higher PMI. Nevertheless, Wikipedia PMI is believed to be a good measure of semantic coherence. Qualitative Analysis Table 1 shows some top words from topics from Gaussian-LDA and LDA on the 20-news dataset for K = 50. The words in Gaussian-LDA are ranked based on the density assigned to them by the posterior predictive distribution in the final sample. As shown, Gaussian LDA is able to capture several intuitive topics in the corpus such as ‘sports’, ‘government’, ‘religion’, ‘universities’, ‘tech’, ‘finance’, etc. One interesting topic discovered by our model (on both the 20-news and NIPS datasets) is the collection of human names, which was not captured by classic LDA. While one might imagine that names associated with particular topics might be preferable to a ‘names-in-general’ topic, this ultimately is a matter of user preference. More substantively, classic LDA failed to identify the ‘finance’ topics. We also noticed that there were certain words (‘don’, ‘writes’, etc.) which often came up as a top word in many topics in classic LDA. However, our model was not able to capture the ‘space’ topics, which LDA was able to identify. We also visualize a part of the continuous space where the word embedding is performed. For this task we performed Principal Component Analysis (PCA) over all the word vectors and plot the first two components, as shown in Figure 3. We can see clear separations between some of the clusters of topics as depicted. The other topics would be separated in other dimensions. 5.2 Performance on documents containing new words In this experiment we evaluate the performance of our model on documents which contain previously unseen words. It should be noted that traditional topic modeling algorithms will typically ignore such words while inferring the topic distribution and hence might miss out on important words.
The continuous topic distributions of the Word Vector Topic Model, on the other hand, will be able to assign topics to an unseen word, if we have the vector representation of the word. Given the recent development of fast and scalable methods of estimating word embeddings, it is possible to train them on huge text corpora, and hence this makes our model a viable alternative for topic inference on documents with new words.

[Table 1 (word grids omitted): Top words of some topics from Gaussian-LDA and multinomial LDA on 20-newsgroups for K = 50. Words in Gaussian LDA are ranked based on the density assigned to them by the posterior predictive distribution. The last row for each method indicates the PMI score (w.r.t. Wikipedia co-occurrence) of the topic's fifteen highest-ranked words: Gaussian LDA 0.8302, 0.9302, 0.4943, 2.0306, 0.5216, 2.3615, 2.7660, 1.4999, 1.1847; multinomial LDA 0.3394, 0.2036, 0.1578, 0.7561, 0.0039, 1.3767, 1.5747, -0.0721, 0.2443.]

Experimental Setup: Since we want to capture the strength of our model on documents containing unseen words, we select a subset of documents and replace words of those documents by their synonyms if they have not occurred in the corpus before. We obtain the synonym of a word using two existing resources, and hence we create two such datasets. For the first set, we use the Paraphrase Database (Ganitkevitch et al., 2013) to get the lexical paraphrase of a word. The paraphrase database (http://www.cis.upenn.edu/~ccb/ppdb/) is a semantic lexicon containing around 169 million paraphrase pairs, of which 7.6 million are lexical (one word to one word) paraphrases.
The dataset comes in varying size ranges, starting from S to XXXL, in increasing order of size and decreasing order of paraphrasing confidence. For our experiments we selected the L size of the paraphrase database. The second set was obtained using WordNet (Miller, 1995), a large human-annotated lexicon for English that groups words into sets of synonyms called synsets. To obtain the synonym of a word, we first label the words with their part-of-speech using the Stanford POS tagger (Toutanova et al., 2003). Then we use the WordNet database, accessed with the JWI toolkit (Finlayson, 2014), to get the synonym from its synset. We select the first synonym from the synset which has not occurred in the corpus before. On the 20-news dataset (vocab size = 18,179 words, test corpus size = 188,694 words), a total of 21,919 words (2,741 distinct words) were replaced by synonyms from PPDB and 38,687 words (2,037 distinct words) were replaced by synonyms from WordNet.

Figure 3: The first two principal components for the word embeddings of the top words of the topics shown in Table 1 have been visualized. Each blob represents a word, color-coded according to its topic in Table 1.

Evaluation Benchmark: As mentioned before, traditional topic model algorithms cannot handle OOV words, so comparing the performance of our model with those models would be unfair. Recently, Zhai and Boyd-Graber (2013) proposed an extension of LDA (infvoc) which can incorporate new words. They have shown better performance than other fixed-vocabulary algorithms on a document classification task on the 20-newsgroups dataset which uses the topic distribution of a document as features. Even though the infvoc model can handle OOV words, it will most likely not assign a high probability to a new topical word when it encounters it for the first time, since that probability is directly proportional to the number of times the word has been observed. On the other hand, our model could assign a high probability to the word if its corresponding embedding gets a high probability from one of the topic Gaussians. With the experimental setup mentioned before, we want to evaluate the performance of this property of our model. Using the topic distribution of a document as features, we try to classify the document into one of the 20 news groups it belongs to. If the document topic distribution is modeled well, then our model should be able to do a better job in the classification task. To infer the topic distribution of a document, we follow the usual strategy of fixing the learnt topics during the training phase and then running Gibbs sampling on the test set (G-LDA (fix) in Table 2). However, infvoc is an online algorithm, so it would be unfair to compare it with our model, which observes the entire set of documents during test time. Therefore we implement an online version of our algorithm using Gibbs sampling, following Yao et al. (2009). We input the test documents in batches and do inference on those batches independently, also sampling for the topic parameters, along the lines of infvoc. The batch sizes for our experiments are mentioned in parentheses in Table 2. We classify using the multi-class logistic regression classifier available in Weka (Hall et al., 2009).
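The WordNet-based substitution used to build the synthetic test sets, described earlier in this section, can be sketched as follows. The sketch uses NLTK's POS tagger and WordNet interface purely for illustration (the paper itself uses the Stanford tagger and the Java JWI toolkit), and the tag mapping and selection heuristics are our assumptions.

```python
import nltk
from nltk.corpus import wordnet as wn

PENN_TO_WN = {"N": wn.NOUN, "V": wn.VERB, "J": wn.ADJ, "R": wn.ADV}

def oov_synonym(word, penn_tag, vocabulary):
    """Return the first synonym of `word` that is not in `vocabulary`, or None."""
    pos = PENN_TO_WN.get(penn_tag[:1])
    if pos is None:
        return None
    for synset in wn.synsets(word, pos=pos):
        for lemma in synset.lemma_names():
            candidate = lemma.replace("_", " ").lower()
            if candidate != word and candidate not in vocabulary:
                return candidate
    return None

def make_synthetic_document(tokens, vocabulary):
    """Replace words with out-of-vocabulary synonyms where possible."""
    out = []
    for word, tag in nltk.pos_tag(tokens):
        replacement = oov_synonym(word.lower(), tag, vocabulary)
        out.append(replacement if replacement is not None else word)
    return out
```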
It is clear from Table 2 that we outperform infvoc in all settings of our experiments. This implies that even if new documents have a significant amount of new words, our model would still do a better job of modeling them. We also conduct an experiment to check the actual difference between the topic distributions of the original and synthetic documents. Let h and h′ denote the topic vectors of the original and synthetic documents. Table 3 shows the average l1, l2 and l∞ norms of (h − h′) for the test documents in the NIPS dataset. A low value of these metrics indicates higher similarity. As shown in the table, Gaussian LDA performs better here too.

Model          Accuracy
               PPDB     WordNet
infvoc         28.00%   19.30%
G-LDA (fix)    44.51%   43.53%
G-LDA (1)      44.66%   43.47%
G-LDA (100)    43.63%   43.11%
G-LDA (1932)   44.72%   42.90%
Table 2: Accuracy of our model and infvoc on the synthetic datasets. In G-LDA (fix), the topic distributions learnt during training were fixed; G-LDA (1, 100, 1932) is the online implementation of our model where the documents come in mini-batches. The numbers in parentheses denote the size of the batch. The full size of the test corpus is 1932.

Model          PPDB (Mean Deviation)
               L1      L2     L∞
infvoc         94.95   7.98   1.72
G-LDA (fix)    15.13   1.81   0.66
G-LDA (1)      15.71   1.90   0.66
G-LDA (10)     15.76   1.97   0.66
G-LDA (174)    14.58   1.66   0.66
Table 3: Average L1, L2, and L∞ deviation for the difference between the topic distribution of the actual document and that of the synthetic document on the NIPS corpus. Compared to infvoc, G-LDA achieves a lower deviation of the topic distribution inferred on the synthetic documents with respect to the actual document. The full size of the test corpus is 174.

6 Conclusion and Future Work While word embeddings have been incorporated to produce state-of-the-art results in numerous supervised natural language processing tasks, from the word level to the document level, they have played a more minor role in unsupervised learning problems. This work shows some of the promise that they hold in this domain. Our model can be extended in a number of potentially useful, but straightforward ways. First, DPMM models of word emissions would better model the fact that identical vectors will be generated multiple times, and perhaps add flexibility to the topic distributions that can be captured, without sacrificing our preference for topical coherence. More broadly still, running LDA on documents consisting of different modalities than just text is facilitated by using the lingua franca of vector space representations, so we expect numerous interesting applications in this area. An interesting extension to our work would be the ability to handle polysemous words based on multi-prototype vector space models (Neelakantan et al., 2014; Reisinger and Mooney, 2010), and we keep this as an avenue for future research. Acknowledgments We thank the anonymous reviewers and Manaal Faruqui for helpful comments and feedback. References Eneko Agirre, Enrique Alfonseca, Keith Hall, Jana Kravalova, Marius Paşca, and Aitor Soroa. 2009. A study on similarity and relatedness using distributional and wordnet-based approaches. In Proceedings of NAACL. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. J. Mach. Learn. Res., 3:993–1022, March. Jonathan Chang, Jordan Boyd-Graber, Chong Wang, Sean Gerrish, and David M. Blei. 2009. Reading tea leaves: How humans interpret topic models. In Neural Information Processing Systems. Ronan Collobert and Jason Weston.
2008. A unified architecture for natural language processing: deep neural networks with multitask learning. In Proceedings of ICML. S. C. Deerwester, S. T. Dumais, T. K. Landauer, G. W. Furnas, and R. A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science. Chris Dyer, Adam Lopez, Juri Ganitkevitch, Johnathan Weese, Ferhan Ture, Phil Blunsom, Hendra Setiawan, Vladimir Eidelman, and Philip Resnik. 2010. cdec: A decoder, alignment, and learning framework for finite-state and context-free translation models. In Proceedings of ACL. Mark Finlayson, 2014. Proceedings of the Seventh Global Wordnet Conference, chapter Java Libraries for Accessing the Princeton Wordnet: Comparison and Evaluation, pages 78–85. Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The paraphrase database. In Proceedings of NAACL-HLT, pages 758–764, Atlanta, Georgia, June. Association for Computational Linguistics. T. L. Griffiths and M. Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, 101:5228–5235, April. Jiang Guo, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Revisiting embedding features for simple semi-supervised learning. In Proceedings of EMNLP. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: An update. SIGKDD Explor. Newsl., 11(1):10–18, November. Zellig Harris. 1954. Distributional structure. Word, 10(23):146–162. Karl Moritz Hermann and Phil Blunsom. 2014. Multilingual models for compositional distributed semantics. arXiv preprint arXiv:1404.4641. 803 Pengfei Hu, Wenju Liu, Wei Jiang, and Zhanlei Yang. 2012. Latent topic model based on Gaussian-LDA for audio retrieval. In Pattern Recognition, volume 321 of CCIS, pages 556–563. Springer. Aaron Q. Li, Amr Ahmed, Sujith Ravi, and Alexander J. Smola. 2014. Reducing the sampling complexity of topic models. In Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’14. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 746–751, Atlanta, Georgia, June. Association for Computational Linguistics. George A. Miller. 1995. Wordnet: A lexical database for english. Commun. ACM, 38(11):39–41, November. Kevin P. Murphy. 2012. Machine Learning: A Probabilistic Perspective. The MIT Press. Arvind Neelakantan, Jeevan Shankar, Alexandre Passos, and Andrew McCallum. 2014. Efficient nonparametric estimation of multiple embeddings per word in vector space. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25-29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL. David Newman, Sarvnaz Karimi, and Lawrence Cavedon. 2009. External evaluation of topic models. pages 11–18, December. Joseph Reisinger and Raymond J. Mooney. 2010. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10. G. Stewart. 1998. Matrix Algorithms. Society for Industrial and Applied Mathematics. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. 
Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 173–180, Stroudsburg, PA, USA. Association for Computational Linguistics. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In Proc. of ACL. Peter D. Turney and Patrick Pantel. 2010. From frequency to meaning : Vector space models of semantics. JAIR, pages 141–188. Peter D. Turney. 2006. Similarity of semantic relations. Comput. Linguist., 32(3):379–416, September. Michael D. Vose. 1991. A linear algorithm for generating random numbers with a given distribution. Software Engineering, IEEE Transactions on. Li Wan, Leo Zhu, and Rob Fergus. 2012. A hybrid neural network-latent topic model. In Neil D. Lawrence and Mark A. Girolami, editors, Proceedings of the Fifteenth International Conference on Artificial Intelligence and Statistics (AISTATS-12), volume 22, pages 1287–1294. Limin Yao, David Mimno, and Andrew McCallum. 2009. Efficient methods for topic model inference on streaming document collections. In Proceedings of the 15th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, KDD ’09, pages 937–946, New York, NY, USA. ACM. Ke Zhai and Jordan L. Boyd-Graber. 2013. Online latent dirichlet allocation with infinite vocabulary. In ICML (1), volume 28 of JMLR Proceedings, pages 561–569. JMLR.org. 804
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 805–814, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Pairwise Neural Machine Translation Evaluation Francisco Guzm´an Shafiq Joty Llu´ıs M`arquez and Preslav Nakov ALT Research Group Qatar Computing Research Institute — HBKU, Qatar Foundation {fguzman,sjoty,lmarquez,pnakov}@qf.org.qa Abstract We present a novel framework for machine translation evaluation using neural networks in a pairwise setting, where the goal is to select the better translation from a pair of hypotheses, given the reference translation. In this framework, lexical, syntactic and semantic information from the reference and the two hypotheses is compacted into relatively small distributed vector representations, and fed into a multi-layer neural network that models the interaction between each of the hypotheses and the reference, as well as between the two hypotheses. These compact representations are in turn based on word and sentence embeddings, which are learned using neural networks. The framework is flexible, allows for efficient learning and classification, and yields correlation with humans that rivals the state of the art. 1 Introduction Automatic machine translation (MT) evaluation is a necessary step when developing or comparing MT systems. Reference-based MT evaluation, i.e., comparing the system output to one or more human reference translations, is the most common approach. Existing MT evaluation measures typically output an absolute quality score by computing the similarity between the machine and the human translations. In the simplest case, the similarity is computed by counting word n-gram matches between the translation and the reference. This is the case of BLEU (Papineni et al., 2002), which has been the standard for MT evaluation for years. Nonetheless, more recent evaluation measures take into account various aspects of linguistic similarity, and achieve better correlation with human judgments. Having absolute quality scores at the sentence level allows to rank alternative translations for a given source sentence. This is useful, for instance, for statistical machine translation (SMT) parameter tuning, for system comparison, and for assessing the progress during MT system development. The quality of automatic MT evaluation metrics is usually assessed by computing their correlation with human judgments. To that end, quality rankings of alternative translations have been created by human judges. It is known that assigning an absolute score to a translation is a difficult task for humans. Hence, ranking-based evaluations, where judges are asked to rank the output of 2 to 5 systems, have been used in recent years, which has yielded much higher inter-annotator agreement (Callison-Burch et al., 2007). These human quality judgments can be used to train automatic metrics. This supervised learning can be oriented to predict absolute scores, e.g., using regression (Albrecht and Hwa, 2008), or rankings (Duh, 2008; Song and Cohn, 2011). A particular case of the latter is used to learn in a pairwise setting, i.e., given a reference and two alternative translations (or hypotheses), the task is to decide which one is better. This setting emulates closely how human judges perform evaluation assessments in reality, and can be used to produce rankings for an arbitrarily large number of hypotheses. 
In this pairwise setting, the challenge is to learn, from a pair of hypotheses, which are the features that help to discriminate the better from the worse translation. Although the pairwise setting does not produce absolute quality scores (i.e., it is not an evaluation metric applicable to a single translation), it is useful and arguably sufficient for most evaluation and MT development scenarios.1 1We do not argue that the pairwise approach is better than the direct estimation of human quality scores. Both approaches have pros and cons; we see them as complementary. 805 Recently, Guzm´an et al. (2014a) presented a learning framework for this pairwise setting, based on preference kernels and support vector machines (SVM). They obtained promising results using syntactic and discourse-based structures. However, using convolution kernels over complex structures comes at a high computational cost both at training and at testing time because the use of kernels requires that the SVM operate in the much slower dual space. Thus, some simplification is needed to make it practical. While there are some solutions in the kernel-based learning framework to alleviate the computational burden, in this paper we explore an entirely different direction. We present a novel neural-based architecture for learning in the pairwise setting for MT evaluation. Lexical, syntactic and semantic information from the reference and the two hypotheses is compacted into relatively small distributed vector representations and fed into the input layer, together with a set of individual real-valued features coming from simple pre-existing MT evaluation metrics. A hidden layer, motivated by our intuitions on the pairwise ranking problem, is used to capture interactions between the relevant input components. Finally, we present a task-oriented cost function, specifically tailored for this problem. Our evaluation results on the WMT12 metrics task benchmark datasets (Callison-Burch et al., 2012) show very high correlation with human judgments. These results clearly surpass (Guzm´an et al., 2014a) and are comparable to the best previously reported results for this dataset, achieved by DiscoTK (Joty et al., 2014), which is a much heavier combination-based metric. Another advantage of the proposed architecture is efficiency. Due to the vector-based compression of the linguistic structure and the relatively reduced size of the network, testing is fast, which would greatly facilitate the practical use of this approach in real MT evaluation and development. Finally, we empirically show that syntacticallyand semantically-oriented embeddings can be incorporated to produce sizeable and cumulative gains in performance over a strong combination of pre-existing MT evaluation measures (BLEU, NIST, METEOR, and TER). This is promising evidence towards our longer-term goal of defining a general platform for integrating varied linguistic information and for producing more informed MT evaluation measures. 2 Related Work Contemporary MT evaluation measures have evolved beyond simple lexical matching, and now take into account various aspects of linguistic structures, including synonymy and paraphrasing (Lavie and Denkowski, 2009), syntax (Gim´enez and M`arquez, 2007; Popovi´c and Ney, 2007; Liu and Gildea, 2005), semantics (Gim´enez and M`arquez, 2007; Lo et al., 2012), and even discourse (Comelles et al., 2010; Wong and Kit, 2012; Guzm´an et al., 2014b; Joty et al., 2014). 
The combination of several of these aspects has led to improved results in metric evaluation campaigns, such as the WMT metrics task (Bojar et al., 2014). In this paper, we present a general framework for learning to rank translations in the pairwise setting, using information from several linguistic representations of the translations and references. This work has connections with the ranking-based approaches for learning to reproduce human judgments of MT quality. In particular, our setting is similar to that of Duh (2008), but differs from it both in terms of the feature representation and of the learning framework. For instance, we integrate several layers of linguistic information, while Duh (2008) only used lexical and POS matches as features. Secondly, we use information about both the reference and the two alternative translations simultaneously in a neural-based learning framework capable of modeling complex interactions between the features. Another related work is that of Kulesza and Shieber (2004), in which lexical and syntactic features, together with other metrics, e.g., BLEU and NIST, are used in an SVM classifier to discriminate good from bad translations. However, their setting is not pairwise comparison, but a classification task to distinguish human- from machineproduced translations. Moreover, in their work, using syntactic features decreased the correlation with human judgments dramatically (although classification accuracy improved), while in our case the effect is positive. In our previous work (Guzm´an et al., 2014a), we introduced a learning framework for the pairwise setting, based on preference kernels and SVMs. We used lexical, POS, syntactic and discourse-based information in the form of treelike structures to learn to differentiate better from worse translations. 806 However, in that work we used convolution kernels, which is computationally expensive and does not scale well to large datasets and complex structures such as graphs and enriched trees. This inefficiency arises both at training and testing time. Thus, here we use neural embeddings and multilayer neural networks, which yields an efficient learning framework that works significantly better on the same datasets (although we are not using exactly the same information for learning). To the best of our knowledge, the application of structured neural embeddings and a neural network learning architecture for MT evaluation is completely novel. This is despite the growing interest in recent years for deep neural nets (NNs) and word embeddings with application to a myriad of NLP problems. For example, in SMT we have observed an increased use of neural nets for language modeling (Bengio et al., 2003; Mikolov et al., 2010) as well as for improving the translation model (Devlin et al., 2014; Sutskever et al., 2014). Deep learning has spread beyond language modeling. For example, recursive NNs have been used for syntactic parsing (Socher et al., 2013a) and sentiment analysis (Socher et al., 2013b). The increased use of NNs by the NLP community is in part due to (i) the emergence of tools such as word2vec (Mikolov et al., 2013a) and GloVe (Pennington et al., 2014), which have enabled NLP researchers to learn word embeddings, and (ii) unified learning frameworks, e.g., (Collobert et al., 2011), which cover a variety of NLP tasks such as part-of-speech tagging, chunking, named entity recognition, and semantic role labeling. 
While in this work we make use of widely available pre-computed structured embeddings, the novelty of our work goes beyond the type of information considered as input, and resides on the way it is integrated to a neural network architecture that is inspired by our intuitions about MT evaluation. 3 Neural Ranking Model Our motivation for using neural networks for MT evaluation is twofold. First, to take advantage of their ability to model complex non-linear relationships efficiently. Second, to have a framework that allows for easy incorporation of rich syntactic and semantic representations captured by word embeddings, which are in turn learned using deep learning. 3.1 Learning Task Given two translation hypotheses t1 and t2 (and a reference translation r), we want to tell which of the two is better.2 Thus, we have a binary classification task, which is modeled by the class variable y, defined as follows: y =  1 if t1 is better than t2 given r 0 if t1 is worse than t2 given r (1) We model this task using a feed-forward neural network (NN) of the form: p(y|t1, t2, r) = Ber(y|f(t1, t2, r)) (2) which is a Bernoulli distribution of y with parameter σ = f(t1, t2, r), defined as follows: f(t1, t2, r) = sig(wT v φ(t1, t2, r) + bv) (3) where sig is the sigmoid function, φ(x) defines the transformations of the input x through the hidden layer, wv are the weights from the hidden layer to the output layer, and bv is a bias term. 3.2 Network Architecture In order to decide which hypothesis is better given the tuple (t1, t2, r) as input, we first map the hypotheses and the reference to a fixed-length vector [xt1, xt2, xr], using syntactic and semantic embeddings. Then, we feed this vector as input to our neural network, whose architecture is shown in Figure 1. f(t1,t2,r) ψ(t1,r) ψ(t2,r) h12 h1r h2r v xt2 xr xt1 t1 t2 r sentences embeddings pairwise nodes pairwise features output layer Figure 1: Overall architecture of the neural network. In our architecture, we model three types of interactions, using different groups of nodes in the hidden layer. We have two evaluation groups h1r and h2r that model how similar each hypothesis ti is to the reference r. 2In this work, we do not learn to predict ties, and ties are excluded from our training data. 807 The vector representations of the hypothesis (i.e., xt1 or xt2) together with the reference (i.e., xr) constitute the input to the hidden nodes in these two groups. The third group of hidden nodes h12, which we call similarity group, models how close t1 and t2 are. This might be useful as highly similar hypotheses are likely to be comparable in quality, irrespective of whether they are good or bad in absolute terms. The input to each of these groups is represented by concatenating the vector representations of the two components participating in the interaction, i.e., x1r = [xt1, xr], x2r = [xt2, xr], x12 = [xt1, xt2]. In summary, the transformation φ(t1, t2, r) = [h12, h1r, h2r] in our NN architecture can be written as follows: h1r = g(W1rx1r + b1r) h2r = g(W2rx2r + b2r) h12 = g(W12x12 + b12) where g(.) is a non-linear activation function (applied component-wise), W ∈RH×N are the associated weights between the input layer and the hidden layer, and b are the corresponding bias terms. 
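A compact numpy sketch of this forward pass is given below. It is an illustration of the architecture rather than the Theano code used in the paper; the parameter names and shapes are assumptions consistent with the description, and the skip-arc features ψ(t1, r) and ψ(t2, r) from Figure 1 are discussed just below.

```python
import numpy as np

def forward(x_t1, x_t2, x_r, psi_1r, psi_2r, params):
    """Score f(t1, t2, r): probability that hypothesis t1 is better than t2 given r."""
    W12, b12 = params["W12"], params["b12"]
    W1r, b1r = params["W1r"], params["b1r"]
    W2r, b2r = params["W2r"], params["b2r"]
    w_v, b_v = params["w_v"], params["b_v"]

    # Pairwise input groups: each concatenates two sentence vectors.
    x_1r = np.concatenate([x_t1, x_r])
    x_2r = np.concatenate([x_t2, x_r])
    x_12 = np.concatenate([x_t1, x_t2])

    # Two "evaluation" groups and one "similarity" group in the hidden layer
    # (tanh activation, as the paper uses).
    h_1r = np.tanh(W1r @ x_1r + b1r)
    h_2r = np.tanh(W2r @ x_2r + b2r)
    h_12 = np.tanh(W12 @ x_12 + b12)

    # Skip arcs: external pairwise features (e.g., scores of existing metrics)
    # bypass the hidden layer and feed the output directly.
    v = np.concatenate([h_12, h_1r, h_2r, psi_1r, psi_2r])
    return 1.0 / (1.0 + np.exp(-(w_v @ v + b_v)))   # sigmoid output

# Example shapes: 50-dimensional sentence vectors, H = 4 nodes per hidden group,
# and a handful of external features per (hypothesis, reference) pair.
```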
In our experiments, we used tanh as an activation function, rather than sig, to be consistent with how parts of our input vectors were generated.3 In addition, our model allows to incorporate external sources of information by enabling skip arcs that go directly from the input to the output, skipping the hidden layer. In our setting, these arcs represent pairwise similarity features between the translation hypotheses and the reference (e.g., the BLEU scores of the translations). We denote these pairwise external feature sets as ψ1r = ψ(t1, r) and ψ2r = ψ(t2, r). When we include the external features in our architecture, the activation at the output, i.e., eq. (3), can be rewritten as follows: f(t1, t2, r) = sig(wT v [φ(t1, t2, r), ψ1r, ψ2r] + bv) 3.3 Network Training The negative log likelihood of the training data for the model parameters θ = (W12, W1r, W2r, wv, b12, b1r, b2r, bv) can be written as follows: Jθ = − X n yn log ˆynθ + (1 −yn) log (1 −ˆynθ) (4) 3Many of our input representations consist of word embeddings trained with neural networks that used tanh as an activation function. In the above formula, ˆynθ = fn(t1, t2, r) is the activation at the output layer for the n-th data instance. It is also common to use a regularized cost function by adding a weight decay penalty (e.g., L2 or L1 regularization) and to perform maximum aposteriori (MAP) estimation of the parameters. We trained our network with stochastic gradient descent (SGD), mini-batches and adagrad updates (Duchi et al., 2011), using Theano (Bergstra et al., 2010). 4 Experimental Setup In this section, we describe the different aspects of our general experimental setup (we will discuss some extensions thereof in Section 6), starting with a description of the input representations we use to capture the syntactic and semantic characteristics of the two hypothesis translations and the corresponding reference, as well as the datasets used to evaluate the performance of our model. 4.1 Word Embedding Vectors Word embeddings play a crucial role in our model, since they allow us to model complex relations between the translations and the reference using syntactic and semantic vector representations. Syntactic vectors. We generate a syntactic vector for each sentence using the Stanford neural parser (Socher et al., 2013a), which generates a 25dimensional vector as a by-product of syntactic parsing using a recursive NN. Below we will refer to these vectors as SYNTAX25. Semantic vectors. We compose a semantic vector for a given sentence using the average of the embedding vectors for the words it contains (Mitchell and Lapata, 2010). We use pre-trained, fixedlength word embedding vectors produced by (i) GloVe (Pennington et al., 2014), (ii) COMPOSES (Baroni et al., 2014), and (iii) word2vec (Mikolov et al., 2013b). Our primary representation is based on 50dimensional GloVe vectors, trained on Wikipedia 2014+Gigaword 5 (6B tokens), to which below we will refer as WIKI-GW25. Furthermore, we experiment with WIKIGW300, the 300-dimensional GloVe vectors trained on the same data, as well as with the CC300-42B and CC-300-840B, 300-dimensional GloVe vectors trained on 42B and on 840B tokens from Common Crawl. 808 We also experiment with the pre-trained, 300dimensional word2vec embedding vectors, or WORD2VEC300, trained on 100B words from Google News. Finally, we use COMPOSES400, the 400-dimensional COMPOSES vectors trained on 2.8 billion tokens from ukWaC, the English Wikipedia, and the British National Corpus. 
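As an illustration of how such fixed-length semantic sentence vectors can be composed, the sketch below averages pre-trained GloVe embeddings over the tokens of a sentence; the file-format parsing, lowercasing, and the zero-vector fallback for out-of-vocabulary tokens are our own assumptions.

```python
import numpy as np

def load_glove(path):
    """Load GloVe vectors from a whitespace-separated text file: word v1 ... vM."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def sentence_vector(tokens, vectors, dim=50):
    """Average the embeddings of the tokens (Mitchell and Lapata, 2010 style);
    tokens missing from the vocabulary are simply skipped."""
    vecs = [vectors[t.lower()] for t in tokens if t.lower() in vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim, dtype=np.float32)

# x_r, x_t1 and x_t2 in the model are such averaged vectors, e.g.:
# glove = load_glove("glove.6B.50d.txt")   # illustrative file name
# x_r = sentence_vector(reference.split(), glove)
```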
4.2 Tuning and Evaluation Datasets We experiment with datasets of segment-level human rankings of system outputs from the WMT11, WMT12 and WMT13 Metrics shared tasks (Callison-Burch et al., 2011; Callison-Burch et al., 2012; Mach´aˇcek and Bojar, 2013). We focus on translating into English, for which the WMT11 and WMT12 datasets can be split by source language: Czech (cs), German (de), Spanish (es), and French (fr); WMT13 also has Russian (ru). 4.3 Evaluation Score We evaluate our metrics in terms of correlation with human judgments measured using Kendall’s τ. We report τ for the individual languages as well as macro-averaged across all languages. Note that there were different versions of τ at WMT over the years. Prior to 2013, WMT used a strict version, which was later relaxed at WMT13 and further revised at WMT14. See (Mach´aˇcek and Bojar, 2014) for a discussion. Here we use the strict version used at WMT11 and WMT12. 4.4 Experimental Settings Datasets: We train our neural models on WMT11 and we evaluate them on WMT12. We further use a random subset of 5,000 examples from WMT13 as a validation set to implement early stopping. Early stopping: We train on WMT11 for up to 10,000 epochs, and we calculate Kendall’s τ on the development set after each epoch. We then select the model that achieves the highest τ on the validation set; in case of ties for the best τ, we select the latest epoch that achieved the highest τ. Network parameters: We train our neural network using SGD with adagrad, an initial learning rate of η = 0.01, mini-batches of size 30, and L2 regularization with a decay parameter λ = 1e−4. We initialize the weights for our matrices by sampling from a uniform distribution following (Bengio and Glorot, 2010). We further set the size of each of our pairwise hidden layers H to four nodes, and we normalize the input data using minmax to map the feature values to the range [−1, 1]. 5 Experiments and Results The main findings of our experiments are shown in Table 1. Section I of Table 1 shows the results for four commonly-used metrics for MT evaluation that compare a translation hypothesis to the reference(s) using primarily lexical information like word and n-gram overlap (even though some allow paraphrases): BLEU, NIST, TER, and METEOR (Papineni et al., 2002; Doddington, 2002; Snover et al., 2006; Denkowski and Lavie, 2011). We will refer to the set of these four metrics as 4METRICS. These metrics are not tuned and achieve Kendall’s τ between 18.5 and 23.5. Section II of Table 1 shows the results for multilayer neural networks trained on vectors from word embeddings only: SYNTAX25 and WIKIGW25. These networks achieve modest τ values around 10, which should not be surprising: they use very general vector representations and have no access to word or n-gram overlap or to length information, which are very important features to compute similarity against the reference. However, as will be discussed below, their contribution is complementary to the four previous evaluation metrics and will lead to significant improvements in combination with them. Section III of Table 1 shows the results for neural networks that combine the four metrics from 4METRICS with SYNTAX25 and WIKI-GW25. 
We can see that just combining the four metrics in a flat neural net (i.e., no hidden layer), which is equivalent to a logistic regression, yields a τ of 27.06, which is better than the best of the four metrics by 3.5 points absolute, and also better by over 1.5 points absolute than the best metric that participated at the WMT12 metrics task competition (SPEDE07PP with τ = 25.4). Indeed, 4METRICS is a strong mix that involves not only simple lexical overlap but also approximate matching, paraphrases, edit distance, lengths, etc. Yet, adding to 4METRICS the embedding vectors yields sizeable further improvements: +1.5 and +2.0 points absolute when adding SYNTAX25 and WIKI-GW25, respectively. Finally, adding both yields even further improvements close to τ of 30 (+2.64 τ points), showing that lexical semantics and syntactic representations are complementary. Section IV of Table 1 puts these numbers in perspective: it lists the τ for the top three systems that participated at WMT12, whose scores ranged between 22.9 and 25.4. 809 System Details Kendall’s τ I 4METRICS: commonly-used individual metrics cz de es fr AVG BLEU no learning 15.88 18.56 18.57 20.83 18.46 NIST no learning 19.66 23.09 20.41 22.21 21.34 TER no learning 17.80 25.31 22.86 21.05 21.75 METEOR no learning 20.82 26.79 23.81 22.93 23.59 II NN using embedding vectors: syntactic & semantic SYNTAX25 multi-layer NN 8.00 13.03 12.11 7.42 10.14 WIKI-GW25 multi-layer NN 14.31 11.49 9.24 4.99 10.01 III NN using 4METRICS+ embedding vectors 4METRICS logistic regression 23.46 29.95 27.49 27.36 27.06 4METRICS+SYNTAX25 multi-layer NN 26.09 30.58 29.30 28.07 28.51 4METRICS+WIKI-GW25 multi-layer NN 25.67 32.50 29.21 28.92 29.07 4METRICS+SYNTAX25+WIKI-GW25 multi-layer NN 26.30 33.19 30.38 28.92 29.70 IV Comparison to previous results on WMT12 DiscoTK (Joty et al., 2014) Best on the WMT12 dataset na na na na 30.5 SPEDE07PP 1st at the WMT12 competition 21.2 27.8 26.5 26.0 25.4 METEOR∗ 2nd at WMT12 the competition 21.2 27.5 24.9 25.1 24.7 (Guzm´an et al., 2014a) Preference kernel approach 23.1 25.8 22.6 23.2 23.7 AMBER 3rd at the WMT12 competition 19.1 24.8 23.1 24.5 22.9 Table 1: Kendall’s tau (τ) on the WMT12 dataset for various metrics. Notes: (i) the version of METEOR that took part in the WMT12 competition (marked with ∗in section IV of the table) is different from the one used in our experiments (section I of the table), (ii) values marked as na were not reported by the authors. We can see that 4METRICS is much stronger than the winner at WMT12, and thus arguably a baseline hard to improve upon. While our results are slightly behind those of DiscoTK (Joty et al., 2014), we should note that we only combine four metrics, plus the vectors, while DiscoTK combines over 20 metrics, many of which are costly to compute. On the other hand, we work in a ranking framework, i.e., we are not interested in producing an absolute score, but in making pairwise decisions only. Mapping these pairwise decisions into an absolute score is challenging and in our experiments it leads to a slight drop in τ (results omitted here to save space). The only other result on WMT12 by authors working with our pairwise framework is our own previous work (Guzm´an et al., 2014a), where we used a preference kernel approach to combine syntactic and discourse trees with lexical information; as we can see, our earlier results are 6 absolute points lower than those we achieve here. Moreover, our NN approach offers advantages over SVMs in terms of computational cost. 
Based on these results, we can conclude that word embeddings, whether syntactic or semantic, offer generalizations that efficiently complement very strong metric combinations, and thus should be considered when designing future MT evaluation metrics. 6 Discussion In this section, we explore how different parts of our framework can be modified to improve its performance, or how it can be extended for further generalization. First, we explore variations of the feature sets from the perspective of both the pairwise features and the embeddings. Then, we analyze the role of the network architecture and of the cost function used for learning. 6.1 Fine-Grained Pairwise Features We have shown that our NN can integrate syntactic and semantic vectors with scores from other metrics. In fact, ours is a more general framework, where one can integrate the components of a metric instead of its score, which could yield better learning. Below, we demonstrate this for BLEU. BLEU has different components: the n-gram precisions, the n-gram matches, the total number of n-grams (n=1,2,3,4), the lengths of the hypotheses and of the reference, the length ratio between them, and BLEU’s brevity penalty. We will refer to this decomposed BLEU as BLEUCOMP. Some of these features were previously used in SIMPBLEU (Song and Cohn, 2011). The results of using the components of BLEUCOMP as features are shown in Table 2. We see that using a single-layer neural network, which is equivalent to logistic regression, outperforms BLEU by more than +1 τ points absolute. 810 Kendall’s τ System Details cz de es fr AVG BLEU no learning 15.88 18.56 18.57 20.83 18.46 BLEUCOMP logistic regression 18.18 21.13 19.79 19.91 19.75 BLEUCOMP+SYNTAX25 multi-layer NN 20.75 25.32 24.85 23.88 23.70 BLEUCOMP+WIKI-GW25 multi-layer NN 22.96 26.63 25.99 24.10 24.92 BLEUCOMP+SYNTAX25+WIKI-GW25 multi-layer NN 22.84 28.92 27.95 24.90 26.15 BLEU+SYNTAX25+WIKI-GW25 multi-layer NN 20.03 25.95 27.07 23.16 24.05 Table 2: Kendall’s τ on WMT12 for neural networks using BLEUCOMP, a decomposed version of BLEU. For comparison, the last line shows a combination using BLEU instead of BLEUCOMP. Source Alone Comb. WIKI-GW25 10.01 29.70 WIKI-GW300 9.66 29.90 CC-300-42B 12.16 29.68 CC-300-840B 11.41 29.88 WORD2VEC300 7.72 29.13 COMPOSES400 12.35 28.54 Table 3: Average Kendall’s τ on WMT12 for semantic vectors trained on different text collections. Shown are results (i) when using the semantic vectors alone, and (ii) when combining them with 4METRICS and SYNTAX25. The improvements over WIKI-GW25 are marked in bold. As before, adding SYNTAX25 and WIKIGW25 improves the results, but now by a more sizable margin: +4 for the former and +5 for the latter. Adding both yields +6.5 improvement over BLEUCOMP, and almost 8 points over BLEU. We see once again that the syntactic and semantic word embeddings are complementary to the information sources used by metrics such as BLEU, and that our framework can learn from richer pairwise feature sets such as BLEUCOMP. 6.2 Larger Semantic Vectors One interesting aspect to explore is the effect of the dimensionality of the input embeddings. Here, we studied the impact of using semantic vectors of bigger sizes, trained on different and larger text collections. The results are shown in Table 3. 
We can see that, compared to the 50-dimensional WIKI-GW25, 300-400 dimensional vectors are generally better by 1-2 τ points absolute when used in isolation; however, when used in combination with 4METRICS+SYNTAX25, they do not offer much gain (up to +0.2), and in some cases, we observe a slight drop in performance. We suspect that the variability across the different collections is due to a domain mismatch. Yet, we defer this question for future work. Kendall’s τ Details cz de es fr AVG single-layer 25.86 32.06 30.03 28.45 29.10 multi-layer 26.30 33.19 30.38 28.92 29.70 Table 4: Kendall’s tau (τ) on the WMT12 dataset for alternative architectures using 4METRICS+SYNTAX25+WIKIGW25 as input. 6.3 Deep vs. Flat Neural Network One interesting question is how much of the learning is due to the rich input representations, and how much happens because of the architecture of the neural network. To answer this, we experimented with two settings: a single-layer neural network, where all input features are fed directly to the output layer (which is logistic regression), and our proposed multi-layer neural network. The results are shown in Table 4. We can see that switching from our multi-layer architecture to a single-layer one yields an absolute drop of 0.6 τ. This suggests that there is value in using the deeper, pairwise layer architecture. 6.4 Task-Specific Cost Function Another question is whether the log-likelihood cost function J(θ) (see Section 3.3) is the most appropriate for our ranking task, provided that it is evaluated using Kendall’s τ as defined below: τ = concord. −disc. −ties concord + disc. + ties (5) where concord., disc. and ties are the number of concordant, disconcordant and tied pairs. Given an input tuple (t1, t2, r), the logistic cost function yields larger values of σ = f(t1, t2, r) if y = 1, and smaller if y = 0, where 0 ≤σ ≤1 is the parameter of the Bernoulli distribution. However, it does not model directly the probability when the order of the hypotheses in the tuple is reversed, i.e., σ′ = f(t2, t1, r). 811 Kendall’s τ Details cz de es fr AVG Logistic 26.30 33.19 30.38 28.92 29.70 Kendall 27.04 33.60 29.48 28.54 29.53 Log.+Ken. 26.90 33.17 30.40 29.21 29.92 Table 5: Kendall’s tau (τ) on WMT12 for alternative cost functions using 4METRICS+SYNTAX25+WIKI-GW25. For our specific task, given an input tuple (t1, t2, r), we want to make sure that the difference between the two output activations ∆= σ −σ′ is positive when y = 1, and negative when y = 0. Ensuring this would take us closer to the actual objective, which is Kendall’s τ. One possible way to do this is to introduce a task-specific cost function that penalizes the disagreements similarly to the way Kendall’s τ does.4 In particular, we define a new Kendall cost as follows: Jθ = − X n yn sig(−γ∆n) + (1 −yn) sig(γ∆n) (6) where we use the sigmoid function sig as a differentiable approximation to the step function. The above cost function penalizes disconcordances, i.e., cases where (i) y = 1 but ∆< 0, or (ii) when y = 0 but ∆> 0. However, we also need to make sure that we discourage ties. We do so by adding a zero-mean Gaussian regularization term exp(−β∆2/2) that penalizes the value of ∆ getting close to zero. Note that the specific values for γ and β are not really important, as long as they are large. In particular, in our experiments, we used γ = β = 100. Table 5 shows a comparison of the two cost functions: (i) the standard logistic cost, and (ii) our Kendall cost. 
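Before turning to the task-specific cost, a small sketch of the strict Kendall's τ of Equation (5) is shown below; the input format (one human preference label and one pair of metric scores per hypothesis pair) is an assumption made only for illustration.

```python
def strict_kendall_tau(human_prefs, metric_scores):
    """human_prefs:  list of +1/-1 labels, +1 meaning t1 was judged better than t2
                     (human ties are excluded from the data, as in the paper).
    metric_scores: list of (score_t1, score_t2) pairs assigned by the metric.
    Returns the WMT11/WMT12-style strict tau, where metric ties count against it."""
    concordant = discordant = ties = 0
    for pref, (s1, s2) in zip(human_prefs, metric_scores):
        if s1 == s2:
            ties += 1
        elif (s1 > s2 and pref > 0) or (s1 < s2 and pref < 0):
            concordant += 1
        else:
            discordant += 1
    total = concordant + discordant + ties
    return (concordant - discordant - ties) / total if total else 0.0
```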
We can see that using the Kendall cost enables effective learning, although it is eventually outperformed by the logistic cost. Our investigation revealed that this was due to a combination of slower convergence and poor initialization. Therefore, we further experimented with a setup where we first used the logistic cost to pretrain the neural network, and then we switched to the Kendall cost in order to perform some finer tuning. As we can see in Table 5 (last row), doing so yielded a sizable improvement over using the Kendall cost only; it also improved over using the logistic cost only. 4Other variations for ranking tasks are possible, e.g., (Yih et al., 2011). 7 Conclusions and Future Work We have presented a novel framework for learning a tunable MT evaluation metric in a pairwise ranking setting, given pre-existing pairwise human preference judgments. In particular, we used a neural network, where the input layer encodes lexical, syntactic and semantic information from the reference and the two translation hypotheses, which is efficiently compacted into relatively small embeddings. The network has a hidden layer, motivated by our intuition about the problem, which captures the interactions between the relevant input components. Unlike previously proposed kernel-based approaches, our framework allows us to do both training and inference efficiently. Moreover, we have shown that it can be trained to optimize a task-specific cost function, which is more appropriate for the pairwise MT evaluation setting. The evaluation results have shown that our NN model yields state-of-the-art results when using lexical, syntactic and semantic features (the latter two based on compact embeddings). Moreover, we have shown that the contribution of the different information sources is additive, thus demonstrating that the framework can effectively integrate complementary information. Furthermore, the framework is flexible enough to exploit different granularities of features such as n-gram matches and other components of BLEU (which individually work better than using the aggregated BLEU score). Finally, we have presented evidence suggesting that using the pairwise hidden layers is advantageous over simpler flat models. In future work, we would like to experiment with an extension that allows for multiple references. We further plan to incorporate features from the source sentence. We believe that our framework can support learning similarities between the two translations and the source, for an improved MT evaluation. Variations of this architecture might be useful for related tasks such as Quality Estimation and hypothesis re-ranking for Machine Translation, where no references are available. Other aspects worth studying as a complement to the present work include (i) the impact of the quality of the syntactic analysis (translations are often just a “word salad”), (ii) differences across language pairs, and (iii) the relevance of the domain the semantic representations are trained on. 812 References Joshua Albrecht and Rebecca Hwa. 2008. Regression for machine translation evaluation at the sentence level. Machine Translation, 22(1-2):1–27. Marco Baroni, Georgiana Dinu, and Germ´an Kruszewski. 2014. Don’t count, predict! A systematic comparison of context-counting vs. context-predicting semantic vectors. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL ’14, pages 238–247, Baltimore, Maryland, USA. Yoshua Bengio and Xavier Glorot. 2010. 
Understanding the difficulty of training deep feedforward neural networks. In Proceedings of AI & Statistics 2010, volume 9, pages 249–256, Chia Laguna Resort, Sardinia, Italy. Yoshua Bengio, R´ejean Ducharme, Pascal Vincent, and Christian Janvin. 2003. A neural probabilistic language model. Journal of Machine Learning Research, 3:1137–1155. James Bergstra, Olivier Breuleux, Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference, SciPy ’10, Austin, Texas. Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, Radu Soricut, Lucia Specia, and Aleˇs Tamchyna. 2014. Findings of the 2014 workshop on statistical machine translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, WMT ’14, pages 12–58, Baltimore, Maryland, USA. Chris Callison-Burch, Cameron Fordyce, Philipp Koehn, Christof Monz, and Josh Schroeder. 2007. (Meta-) evaluation of machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, WMT ’07, pages 136–158, Prague, Czech Republic. Chris Callison-Burch, Philipp Koehn, Christof Monz, and Omar Zaidan. 2011. Findings of the 2011 workshop on statistical machine translation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT ’11, pages 22–64, Edinburgh, Scotland. Chris Callison-Burch, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2012. Findings of the 2012 Workshop on Statistical Machine Translation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, WMT ’12, pages 10–51, Montr´eal, Canada. Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language processing (almost) from scratch. Journal of Machine Learning Research, 12:2493–2537. Elisabet Comelles, Jes´us Gim´enez, Llu´ıs M`arquez, Irene Castell´on, and Victoria Arranz. 2010. Document-level automatic MT evaluation based on discourse representations. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, WMT ’10, pages 333– 338, Uppsala, Sweden. Michael Denkowski and Alon Lavie. 2011. Meteor 1.3: Automatic metric for reliable optimization and evaluation of machine translation systems. In Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT ’11, pages 85–91, Edinburgh, Scotland. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL ’14, pages 1370–1380, Baltimore, Maryland, USA. George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram cooccurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, HLT ’02, pages 138–145, San Francisco, California, USA. Morgan Kaufmann Publishers. John Duchi, Elad Hazan, and Yoram Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12:2121–2159. Kevin Duh. 2008. Ranking vs. regression in machine translation evaluation. 
In Proceedings of the Third Workshop on Statistical Machine Translation, WMT ’08, pages 191–194, Columbus, Ohio, USA. Jes´us Gim´enez and Llu´ıs M`arquez. 2007. Linguistic features for automatic evaluation of heterogenous MT systems. In Proceedings of the Second Workshop on Statistical Machine Translation, WMT ’07, pages 256–264, Prague, Czech Republic. Francisco Guzm´an, Shafiq Joty, Llu´ıs M`arquez, Alessandro Moschitti, Preslav Nakov, and Massimo Nicosia. 2014a. Learning to differentiate better from worse translations. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP ’14, pages 214–220, Doha, Qatar. Francisco Guzm´an, Shafiq Joty, Llu´ıs M`arquez, and Preslav Nakov. 2014b. Using discourse structure improves machine translation evaluation. In Proceedings of 52nd Annual Meeting of the Association for Computational Linguistics, ACL ’14, pages 687– 698, Baltimore, Maryland, USA. 813 Shafiq Joty, Francisco Guzm´an, Llu´ıs M`arquez, and Preslav Nakov. 2014. DiscoTK: Using discourse structure for machine translation evaluation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, WMT ’14, pages 402–408, Baltimore, Maryland, USA. Alex Kulesza and Stuart M. Shieber. 2004. A learning approach to improving sentence-level MT evaluation. In Proceedings of the 10th International Conference on Theoretical and Methodological Issues in Machine Translation. Alon Lavie and Michael Denkowski. 2009. The METEOR metric for automatic evaluation of machine translation. Machine Translation, 23(2–3):105–115. Ding Liu and Daniel Gildea. 2005. Syntactic features for evaluation of machine translation. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization, pages 25–32, Ann Arbor, Michigan, USA. Chi-kiu Lo, Anand Karthik Tumuluru, and Dekai Wu. 2012. Fully automatic semantic MT evaluation. In Proceedings of the Seventh Workshop on Statistical Machine Translation, WMT ’12, pages 243–252, Montr´eal, Canada. Matouˇs Mach´aˇcek and Ondˇrej Bojar. 2013. Results of the WMT13 metrics shared task. In Proceedings of the Eighth Workshop on Statistical Machine Translation, WMT ’13, pages 45–51, Sofia, Bulgaria. Matouˇs Mach´aˇcek and Ondˇrej Bojar. 2014. Results of the WMT14 metrics shared task. In Proceedings of the Ninth Workshop on Statistical Machine Translation, WMT ’14, pages 293–301, Baltimore, Maryland, USA. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock´y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In 11th Annual Conference of the International Speech Communication Association, pages 1045– 1048, Makuhari, Chiba, Japan. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, NIPS ’13, pages 3111–3119. Lake Tahoe, California, USA. Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT ’13, pages 746–751, Atlanta, Georgia, USA. Jeff Mitchell and Mirella Lapata. 2010. Composition in distributional models of semantics. Cognitive Science, 34(8):1388–1439. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. 
BLEU: a method for automatic evaluation of machine translation. In Proceedings of 40th Annual Meting of the Association for Computational Linguistics, ACL ’02, pages 311–318, Philadelphia, Pennsylvania, USA. Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’14, pages 1532–1543, Doha, Qatar. Maja Popovi´c and Hermann Ney. 2007. Word error rates: Decomposition over POS classes and applications for error analysis. In Proceedings of the Second Workshop on Statistical Machine Translation, WMT ’07, pages 48–55, Prague, Czech Republic. Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of the 7th Biennial Conference of the Association for Machine Translation in the Americas, AMTA ’06, Cambridge, Massachusetts, USA. Richard Socher, John Bauer, Christopher D. Manning, and Ng Andrew Y. 2013a. Parsing with compositional vector grammars. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL ’13, pages 455–465, Sofia, Bulgaria. Richard Socher, Alex Perelygin, Jean Wu, Jason Chuang, Christopher D. Manning, Andrew Ng, and Christopher Potts. 2013b. Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP ’13, pages 1631–1642, Seattle, Washington, USA. Xingyi Song and Trevor Cohn. 2011. Regression and ranking based optimisation for sentence-level MT evaluation. In Proceedings of the Sixth Workshop on Statistical Machine Translation, WMT ’11, pages 123–129, Edinburgh, Scotland. Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In Proceedings of the Neural Information Processing Systems, NIPS ’14, Montreal, Canada. Billy Wong and Chunyu Kit. 2012. Extending machine translation evaluation metrics with lexical cohesion to document level. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL ’12, pages 1060–1068, Jeju Island, Korea. Wen-tau Yih, Kristina Toutanova, John C. Platt, and Christopher Meek. 2011. Learning discriminative projections for text similarity measures. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, CoNLL ’11, pages 247–256, Portland, Oregon, USA. 814
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 815–824, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics String-to-Tree Multi Bottom-up Tree Transducers Nina Seemann and Fabienne Braune and Andreas Maletti Institute for Natural Language Processing, University of Stuttgart Pfaffenwaldring 5b, 70569 Stuttgart, Germany {seemanna,braunefe,maletti}@ims.uni-stuttgart.de Abstract We achieve significant improvements in several syntax-based machine translation experiments using a string-to-tree variant of multi bottom-up tree transducers. Our new parameterized rule extraction algorithm extracts string-to-tree rules that can be discontiguous and non-minimal in contrast to existing algorithms for the tree-to-tree setting. The obtained models significantly outperform the string-to-tree component of the Moses framework in a large-scale empirical evaluation on several known translation tasks. Our linguistic analysis reveals the remarkable benefits of discontiguous and non-minimal rules. 1 Introduction We present an application of a variant of local multi bottom-up tree transducers (ℓMBOTs) as proposed in Maletti (2011) to statistical machine translation. ℓMBOTs allow discontinuities on the target language side since they have a sequence of target tree fragments instead of a single tree fragment in their rules. The original approach makes use of syntactic information on both the source and the target side (tree-to-tree) and a corresponding minimal rule extraction is presented in (Maletti, 2011). Braune et al. (2013) implemented it as well as a decoder inside the Moses framework (Koehn et al., 2007) and demonstrated that the resulting tree-to-tree ℓMBOT system significantly improved over its tree-to-tree baseline using minimal rules. We can see at least two drawbacks in this approach. First, experiments investigating the integration of syntactic information on both sides generally report quality deterioration. For example, Lavie et al. (2008), Liu et al. (2009), and Chiang (2010) noted that translation quality tends to decrease in tree-to-tree systems because the rules become too restrictive. Second, minimal rules (i.e., rules that cannot be obtained from other extracted rules) typically consist of a few lexical items only and are thus not the most suitable to translate idiomatic expressions and other fixed phrases. To overcome these drawbacks, we abolish the syntactic information for the source side and develop a string-to-tree variant of ℓMBOTs. In addition, we develop a new rule extraction algorithm that can also extract non-minimal rules. In general, the number of extractable rules explodes, so our rule extraction places parameterized restrictions on the extracted rules in the same spirit as in (Chiang, 2007). In this manner, we combine the advantages of the hierarchical phrase-based approach on the source side and the tree-based approach with discontinuiety on the target side. We evaluate our new system in 3 large-scale experiments using translation tasks, in which we expect discontinuiety on the target. MBOTs are powerful but asymmetric models since discontinuiety is available only on the target. We chose to translate from English to German, Arabic, and Chinese. In all experiments our new system significantly outperforms the string-to-tree syntax-based component (Hoang et al., 2009) of Moses. 
The (potentially) discontiguous rules of our model are very useful in these setups, which we confirm in a quantitative and qualitative analysis. 2 Related work Modern statistical machine translation systems (Koehn, 2009) are based on different translation models. Syntax-based systems have become widely used because of their ability to handle non-local reordering and other linguistic phenomena better than phrase-based models (Och and Ney, 2004). Synchronous tree substitution grammars (STSGs) of Eisner (2003) use a single source and target tree fragment per rule. In contrast, an ℓMBOT rule contains a single source tree 815 concludes X →  VAFIN ist , NP , VP PP geschlossen  X on X →  NP , PP ¨uber NN  human rights →  NN Menschenrechte  the X →  NP die NN  Figure 1: Several valid rules for our MBOT. fragment and a sequence of target tree fragments. ℓMBOTs can also be understood as a restriction of the non-contiguous STSSGs of Sun et al. (2009), which allow a sequence of source tree fragments and a sequence of target tree fragments. ℓMBOT rules require exactly one source tree fragment. While the mentioned syntax-based models use tree fragments for source and target (tree-to-tree), Galley et al. (2004) and Galley et al. (2006) use syntactic annotations only on the target language side (string-to-tree). Further research by DeNeefe et al. (2007) revealed that adding non-minimal rules improves translation quality in this setting. Here we improve statistical machine translation in this setting even further using non-minimal ℓMBOT rules. 3 Theoretical Model As our translation model, we use a string-to-tree variant of the shallow local multi bottom-up tree transducer of Braune et al. (2013). We will call our variant MBOT for simplicity. Our MBOT is a synchronous grammar (Chiang, 2006) similar to a synchronous context-free grammar (SCFG), but instead of a single source and target fragment per rule, our rules are of the form s →(t1, . . . , tn) with a single source string s and potentially several target tree fragments t1, . . . , tn. Besides lexical items the source string can contain (several occurrences of) the placeholder X, which links to non-lexical leaves in the target tree fragments. In contrast to an SCFG each placeholder can have several such links. However, each non-lexical leaf in a target tree fragment has exactly one such link to a placeholder X. An MBOT is simply a finite collection of such rules. Several valid rules are depicted in Figure 1. The sentential forms of our MBOTs, which occur during derivations, have exactly the same shape as our rules and each rule is a sentential Matching sentential forms (underlining for emphasis): concludes X →  VAFIN ist , NP , VP PP geschlossen  X on X →  NP , PP ¨uber NN  Combined sentential form: concludes X on X →  VAFIN ist , NP , VP PP ¨uber NN geschlossen  Figure 2: Substitution of sentential forms. form. We can combine sentential forms with the help of substitution (Chiang, 2006). Roughly speaking, in a sentential form ξ we can replace a placeholder X that is linked (left-to-right) to non-lexical leaves C1, . . . , Ck in the target tree fragments by the source string of any sentential form ζ, whose roots of the target tree fragments (left-to-right) read C1, . . . , Ck. The target tree fragments of ζ will replace the respective linked leaves in the target tree fragments of the sentential form ξ. In other words, substitution has to respect the symbols in the linked target tree fragments and all linked leaves are replaced at the same time. 
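The following minimal sketch (our own data structures, not the MBOT-Moses implementation) makes the substitution operation explicit: a sentential form is a source token list plus a sequence of target tree fragments, and each placeholder X records which open (non-lexical) leaves it is linked to.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Node:
    """A target tree fragment: children are lexical leaves (plain strings) or
    further Nodes; a Node with no children is a non-lexical leaf that is still
    open for substitution."""
    label: str
    children: List[Union["Node", str]] = field(default_factory=list)

@dataclass
class SententialForm:
    source: List[str]        # source tokens, with "X" for placeholders
    targets: List[Node]      # sequence of target tree fragments
    links: List[List[Node]]  # links[i]: open leaves linked to the i-th X

def substitute(outer: SententialForm, x_index: int, inner: SententialForm) -> SententialForm:
    """Replace the x_index-th X of `outer` (left to right) by `inner`.

    The roots of inner.targets must read, left to right, the same labels as
    the leaves linked to that X; all linked leaves are replaced at the same
    time, and the X in the source string is replaced by inner.source.
    Note: the linked leaves of `outer` are grafted onto in place.
    """
    leaves = outer.links[x_index]
    if [t.label for t in inner.targets] != [l.label for l in leaves]:
        raise ValueError("substitution must respect the labels of the linked leaves")
    for leaf, fragment in zip(leaves, inner.targets):
        leaf.children = fragment.children          # graft the fragment onto the leaf
    pos = [i for i, tok in enumerate(outer.source) if tok == "X"][x_index]
    source = outer.source[:pos] + inner.source + outer.source[pos + 1:]
    links = outer.links[:x_index] + inner.links + outer.links[x_index + 1:]
    return SententialForm(source, outer.targets, links)
```

For the example of Figure 2, one would build the "concludes X" form with its single X linked to the open NP and PP leaves and substitute the "X on X" form into it: the source string becomes "concludes X on X" and the NP and PP fragments are grafted onto the linked leaves.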
We illustrate substitution in Figure 2, where we replace the placeholder X in the source string, which is linked to the underlined leaves NP and PP in the target tree fragments. The rule below (also in Figure 1) is also a sentential form and matches since its (underlined) root labels of the target tree fragments read “NP PP”. Thus, we can substitute the latter sentential form into the former and obtain the sentential form shown at the bottom of Figure 2. Ideally, the substitution process is repeated until the complete source sentence is derived. 4 Rule Extraction The rule extraction of Maletti (2011) extracts minimal tree-to-tree rules, which are rules containing both source and target tree fragments, from sentence pairs of a word-aligned and bi-parsed parallel corpus. In particular, this requires parses for both the source and the target language sentences which adds a source for errors and specificity potentially leading to lower translation performance and lower coverage (Wellington et al., 2006). Chiang (2010) showed that string-to-tree systems— 816 that1 concludes2 the3 debate4 on5 human6 rights7 TOP[1,7] PROAV[1,1] damit1 VAFIN[2,2] ist2 NP[3,4] ART[3,3] die3 NN[4,4] Aussprache4 VP[5,7] PP[5,6] APPR[5,5] ¨uber5 NN[6,6] Menschenrechte6 VVPP[7,7] geschlossen7 Figure 3: Word-aligned sentence pair with targetside parse. which he calls fuzzy tree-to-tree-systems— generally yield higher translation quality compared to corresponding tree-to-tree systems. For efficiency reasons the rule extraction of Maletti (2011) only extracts minimal rules, which are the smallest tree fragments compatible with the given word alignment and the parse trees. Similarly, non-minimal rules are those that can be obtained from minimal rules by substitution. In particular, each lexical item of a sentence pair occurs in exactly one minimal rule extracted from that sentence pair. However, minimal rules are especially unsuitable for fixed phrases consisting of rare words because minimal rules encourage small fragments and thus word-by-word translation. Consequently, such fixed phrases will often be assembled inconsistently by substitution from small fragments. Non-minimal rules encourage a consistent translation by covering larger parts of the source sentence. Here we want to develop an efficient rule extraction procedure for our string-to-tree MBOTs that avoids the mentioned drawbacks. Naturally, we could substitute minimal rules into each other to obtain non-minimal rules, but performing substitution for all combinations is clearly intractable. Instead we essentially follow the approach of Koehn et al. (2003), Och and Ney (2004), and Chiang (2007), which is based on consistently aligned phrase pairs. Our training corpus contains word-aligned sentence pairs ⟨e, A, f⟩, which contain a source language sentence e, a target language sentence f, and an alignment A ⊆[1, ℓe] × [1, ℓf], where ℓe and ℓf are the lengths of the sentences e and f, respectively, and [i, i′] = {j ∈Z | i ≤j ≤i′} is the span (closed interval of integers) from i to i′ for all positive integers i ≤i′. Rules are extracted for each pair of the corpus, so in the following let ⟨e, A, f⟩be a word-aligned sentence pair. A source phrase is simply a span [i, i′] ⊆[1, ℓe] and correspondingly, a target phrase is a span [j, j′] ⊆[1, ℓf]. A rule span is a pair ⟨p, ϕ⟩consisting of a source phrase p and a sequence ϕ = p1 · · · pn of (nonoverlapping) target phrases p1, . . . , pn. Spans overlap if their intersection is non-empty. 
If n = 1 (i.e., there is exactly one target phrase in ϕ) then ⟨p, ϕ⟩is also a phrase pair (Koehn et al., 2003). We want to emphasize that formally phrases are spans and not the substrings occuring at that span. Next, we lift the notion of consistently aligned phrase pairs to our rule spans. Simply put, for a consistently aligned rule span ⟨p, p1 · · · pn⟩we require that it respects the alignment A in the sense that the origin i of an alignment (i, j) ∈A is covered by p if and only if the destination j is covered by p1, . . . , pn. Formally, the rule span ⟨p, p1 · · · pn⟩is consistently aligned if for every (i, j) ∈ A we have i ∈ p if and only if j ∈Sn k=1 pk. For example, given the word-aligned sentence pair in Figure 3, the rule span ⟨[2, 4], [2, 4] [7, 7]⟩is consistently aligned, whereas the phrase pair ⟨[2, 4], [2, 7]⟩is not. Our MBOTs use rules consisting of a source string and a sequence of target tree fragments. The target trees are provided by a parser for the target language. For each word-aligned sentence pair ⟨e, A, f⟩we thus have a parse tree t for f. An example is provided in Figure 3. We omit a formal definition of trees, but recall that each node η of the parse tree t governs a (unique) target phrase. In Figure 3 we have indicated those target phrases (spans) as subscript to the non-lexical node labels. A consistently aligned rule span ⟨p, p1 · · · pn⟩of ⟨e, A, f⟩is compatible with t if there exist nodes η1, . . . , ηn of t such that ηk governs pk for all 1 ≤k ≤n. For example, given the word-aligned sentence pair and parse tree t in Figure 3, the consistently aligned rule span ⟨[2, 4], [2, 4] [7, 7]⟩is not compatible with t because there is no node in t that governs [2, 4]. However, for the same data, the rule span ⟨[2, 4], [2, 2] [3, 4] [7, 7]⟩is consistently aligned and compatible with t. The required nodes of t are labeled VAFIN, NP, VVPP. Now we are ready to start the rule extraction. For each consistently aligned rule span ⟨p, p1 · · · pn⟩that is compatible with t and each selection of nodes η1, . . . , ηn of t such that nk governs pk for each 1 ≤k ≤n, we can extract the rule e(p) → flat(tη1), . . . , flat(tηn)  , where 817 Initial rules for rule span ⟨[3, 3], [3, 3]⟩: the →  ART die  rule span ⟨[4, 4], [4, 4]⟩: debate →  NN Aussprache  rule span ⟨[3, 4], [3, 4]⟩: the debate →  NP die Aussprache  rule span ⟨[5, 7], [5, 6]⟩: on human rights →  PP ¨uber Menschenrechte  rule span ⟨[3, 7], [3, 4] [5, 6]⟩: the debate on human rights →  NP die Aussprache , PP ¨uber Menschenrechte  rule span ⟨[2, 2], [2, 2] [7, 7]⟩: concludes →  VAFIN ist , VVPP geschlossen  rule span ⟨[2, 4], [2, 2] [3, 4] [7, 7]⟩: concludes the debate →  VAFIN ist , NP die Aussprache , VVPP geschlossen  rule span ⟨[2, 7], [2, 7]⟩: concludes the debate on human rights →  VAFIN ist , NP die Aussprache , VP ¨uber Menschenrechte geschlossen  Figure 4: Some initial rules extracted from the word-aligned sentence pair and parse of Figure 3. • e(p) is the substring of e at span p,1 • flat(u) removes all internal nodes from u (all nodes except the root and the leaves), and • tη is the subtree rooted in η for node η of t. The rules obtained in this manner are called initial rules for ⟨e, A, f⟩and t. For example, for the rule span ⟨[2, 4], [2, 2] [3, 4] [7, 7]⟩we can extract only one initial rule. More precisely, we have • e([2, 4]) = concludes the debate • tη1 = (VAFIN ist) • tη2 = NP (ART die) (NN Aussprache)  , • and tη3 = (VVPP geschlossen). 
The function flat leaves tη1 and tη3 unchanged, but flat(tη2) = (NP die Aussprache). Thus, we obtain the boxed rule of Figure 4. Clearly, the initial rules are just the start because they are completely lexical in the sense that they never contain the placeholder X in the source string nor a non-lexical leaf in any output tree fragment. We introduce non-lexical rules using the same approach as for the hierarchical rules of Chiang (2007). Roughly speaking, we obtain a new rule r′′ by “excising” an initial rule r from another rule r′ and replacing the removed part by • the placeholder X in the source string, • the root label of the removed tree fragment in the target tree fragments, and • linking the removed parts appropriately, so that the flatted substitution of r into r′′ can 1If p = [i, i′], then e(p) = e[i, i′] is the substring of e ranging from the i-th token to the i′-th token. Extractable rule [top] and initial rule [bottom]: the debate on human rights →  NP die Aussprache , PP ¨uber Menschenrechte  on human rights →  PP ¨uber Menschenrechte  Extractable rule obtained after excision: the debate X →  NP die Aussprache , PP  Figure 5: Excision of the middle initial rule from the topmost initial rule. Substituting the middle rule into the result yields the topmost rule. yield r′. This “excision” process is illustrated in Figure 5, where we remove the middle initial rule from the topmost initial rule. The result is displayed at the bottom in Figure 5. Formally, the set of extractable rules R for a given word-aligned sentence pair ⟨e, A, f⟩with parse tree t for f is the smallest set subject to the following two conditions: • Each initial rule is in R and thus extractable. • For every initial rule r and extractable rule r′ ∈R, any flat rule r′′, into which we can substitute r to obtain ρ with flat(ρ) = r′, is in R and thus extractable.2 For our running example depicted in Figure 3 we display some extractable rules in Figure 6. 2A rule ρ = s →(t1, . . . , tn) is flat if flat(ρ) = ρ, where flat(ρ) = s →(flat(t1), . . . , flat(tn)). 818 Source string “the debate”: concludes X on human rights →  VAFIN ist , NP , VP ¨uber Menschenrechte geschlossen  Source string “on human rights”: concludes the debate X →  VAFIN ist , NP die Aussprache , VP PP geschlossen  Source string “the debate on human rights”: concludes X →  VAFIN ist , NP , VP PP geschlossen  Figure 6: Extractable rules obtained by excising various initial rules (see Figure 4) from the initial rule displayed at the bottom of Figure 4. Unfortunately, already Chiang (2007) points out that the set of all extractable rules is generally too large and keeping all extractable rules leads to slow training, slow decoding, and spurious ambiguity. Our MBOT rules are restricted by the parse tree for the target sentence, but the MBOT model permits additional flexibility due to the presence of multiple target tree fragments. Overall, we experience the same problems, and consequently, in the experiments we use the following additional constraints on rules s →(t1, . . . , tn): (a) We only consider source phrases p of length at most 10 (i.e., i′ −i < 10 for p = [i, i′]).3 (b) The source string s contains at most 5 occurrences of lexical items or X (i.e. ℓs ≤5). (c) The source string s cannot have consecutive Xs (i.e., XX is not a substring of s). (d) The source string contains at least one lexical item that was aligned in ⟨e, A, f⟩. (e) The left-most token of the source string s cannot be X (i.e., s[1, 1] ̸= X). 
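As an illustration of how restrictions (a)–(e) can be enforced, the hypothetical filter below (our own function signature, not the released rule extractor) checks the source side of a candidate rule; `aligned_positions` stands for the set of source sentence positions that carry an alignment link in ⟨e, A, f⟩.

```python
def keep_rule(source_tokens, span, aligned_positions, token_positions):
    """Check restrictions (a)-(e) on the source side of a candidate rule.

    source_tokens     : the rule's source string as a token list, "X" for
                        each placeholder
    span              : (i, i_prime), the source phrase the rule was
                        extracted from
    aligned_positions : set of source positions aligned in A
    token_positions   : sentence position of each entry of source_tokens,
                        or None for an X
    """
    i, i_prime = span
    if i_prime - i >= 10:                       # (a) source phrase length at most 10
        return False
    if len(source_tokens) > 5:                  # (b) at most 5 lexical items or Xs
        return False
    if any(a == "X" and b == "X"
           for a, b in zip(source_tokens, source_tokens[1:])):
        return False                            # (c) no consecutive Xs
    if not any(p is not None and p in aligned_positions
               for p in token_positions):
        return False                            # (d) at least one aligned lexical item
    if source_tokens[0] == "X":                 # (e) left-most token is not X
        return False
    return True
```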
Our implementation can easily be modified to handle other constraints. Figure 7 shows extractable rules violating those additional constraints. Table 1 gives an overview on how many rules are extracted. Our string-to-tree variant extracts 12–17 times more rules than the minimal tree-totree rule extraction. For our experiments (see Section 6), we filter all rule tables on the given input. The decoding times for the minimal ℓMBOT and our MBOT share the same order of magnitude. 5 Model Features For each source language sentence e, we want to determine its most likely translation ˆf given by ˆf = arg maxf p(f | e) = arg maxf p(e | f) · p(f) 3Note that this restricts the set of initial rules. for some unknown probability distributions p. We estimate p(e | f)·p(f) by a log-linear combination of features hi(·) with weights λi scored on sentential forms e →(t) of our extracted MBOT M such that the leaves of t read (left-to-right) f. We use the decoder provided by MBOT-Moses of Braune et al. (2013) and its standard features, which includes all the common features (Koehn, 2009) and a gap penalty 1001−c, where c is the number of target tree fragments that contributed to t. This feature discourages rules with many target tree fragments. As usual, all features are obtained as the product of the corresponding rule features for the rules used to derive e →(t) by means of substitution. The rule weights for the translation weights are obtained as relative frequencies normalized over all rules with the same right- and left-hand side. Good-Turing smoothing (Good, 1953) is applied to all rules that were extracted at most 10 times. The lexical translation weights are obtained as usual. 6 Experimental Results We considered three reasonable baselines: (i) minimal ℓMBOT, (ii) non-contiguous STSSG (Sun et al., 2009), or (iii) a string-to-tree Moses system. We decided against the minimal ℓMBOT as a baseline since tree-to-tree systems generally get lower BLEU scores than string-to-tree systems. We nevertheless present its BLEU scores (see Table 3). Unfortunately, we could not compare to Sun et al. (2009) because their decoder and rule extraction algorithms are not publicly available. Furthermore, we have the impression that their system does not scale well: • Only around 240,000 training sentences were used. Our training data contains between 1.8M and 5.7M sentence pairs. • The development and test set were length819 violates (b): that concludes X on human rights →  PROAV damit , VAFIN ist , NP , VP ¨uber Menschenrechte geschlossen  violates (c): concludes X X →  VAFIN ist , NP , VP PP geschlossen  violates (d): X →  NP  violates (e): X on human rights →  NP , PP ¨uber Menschenrechte  Figure 7: Showing extractable rules violating the restrictions. System number of extracted rules English-To-German English-To-Arabic English-To-Chinese minimal tree-to-tree ℓMBOT 12,478,160 28,725,229 10,162,325 non-minimal string-to-tree MBOT 143,661,376 491,307,787 162,240,663 string-to-tree Moses 14,092,729 55,169,043 17,047,570 Table 1: Overview of numbers of extracted rules with respect to the different extraction algorithms. ratio filtered to sentences up to 50 characters. We do not modify those sets. • Only rules with at most one gap were allowed which would be equivalent to restrict the number of target tree fragments to 2 in our system. Hence we decided to use a string-to-tree Moses system as baseline (see Section 6.1). 
6.1 Setup As a baseline system for our experiments we use the syntax-based component (Hoang et al., 2009) of the Moses toolkit (Koehn et al., 2007). Our system is the presented translation system based on MBOTs. We use the MBOT-Moses decoder (Braune et al., 2013) which – similar to the baseline decoder – uses a CYK+ chart parsing algorithm using a standard X-style parse tree which is sped up by cube pruning (Chiang, 2007) with integrated language model scoring. Our and the baseline system use linguistic syntactic annotation (parses) only on the target side (string-to-tree). During rule extraction we impose the restrictions of Section 4. Additional glue-rules that concatenate partial translations without performing any reordering are used in all systems. For all experiments (English-to-German, English-to-Arabic, and English-to-Chinese), the training data was length-ratio filtered. The word alignments were generated by GIZA++ (Och and Ney, 2003) with the grow-diag-final-and heuristic (Koehn et al., 2005). The following language-specific processing was performed. The German text was true-cased and the functional and morphological annotations were removed from the parse. The Arabic text was tokenized with MADA (Habash et al., 2009) and transliterated according to Buckwalter (2002). Finally, the Chinese text was word-segmented using the Stanford Word Segmenter (Chang et al., 2008). In all experiments the feature weights λi of the log-linear model were trained using minimum error rate training (Och, 2003). The remaining information for the experiments is presented in Table 2. 6.2 Quantitative Analysis The overall translation quality was measured with 4-gram BLEU (Papineni et al., 2002) on truecased data for German, on transliterated data for Arabic, and on word-segmented data for Chinese. Significance was computed with Gimpel’s implementation (Gimpel, 2011) of pairwise bootstrap resampling with 1,000 samples. Table 3 lists the evaluation results. In all three setups the MBOT system significantly outperforms the baseline. For German we obtain a BLEU score of 15.90 which is a gain of 0.68 points. For Arabic we get an increase of 0.78 points which results in 49.10 BLEU. For Chinese we obtain a score of 18.35 BLEU gaining 0.66 points.4 We also trained a vanilla phrase-based system for each language pair on the same data as described in Table 2. To demonstrate the usefulness of the multiple 4NIST-08 also shows BLEU for word-segmented output (http://www.itl.nist.gov/iad/mig/tests/ mt/2008/doc/mt08_official_results_v0. html). Best constrained system: 17.69 BLEU; best unconstrained system: 19.63 BLEU. 820 English to German English to Arabic English to Chinese training data 7th EuroParl corpus (Koehn, 2005) MultiUN corpus (Eisele and Chen, 2010) training data size ≈1.8M sentence pairs ≈5.7M sentence pairs ≈1.9M sentence pairs target-side parser BitPar (Schmid, 2004) Berkeley parser (Petrov et al., 2006) language model 5-gram SRILM (Stolcke, 2002) add. LM data WMT 2013 Arabic in MultiUN Chinese in MultiUN LM data size ≈57M sentences ≈9.7M sentences ≈9.5M sentences tuning data WMT 2013 cut from MultiUN NIST 2002, 2003, 2005 tuning size 3,000 sentences 2,000 sentences 2,879 sentences test data WMT 2013 (Bojar et al., 2013) cut from MultiUN NIST 2008 (NIST, 2010) test size 3,000 sentences 1,000 sentences 1,859 sentences Table 2: Summary of the performed experiments. 
Language pair System BLEU English-to-German Moses Baseline 15.22 MBOT ∗15.90 minimal ℓMBOT 14.09 Phrase-based Moses 16.73 English-to-Arabic Moses Baseline 48.32 MBOT ∗49.10 minimal ℓMBOT 32.88 Phrase-based Moses 50.27 English-to-Chinese Moses Baseline 17.69 MBOT ∗18.35 minimal ℓMBOT 12.01 Phrase-based Moses 18.09 Table 3: Evaluation results. The starred results are statistically significant improvements over the baseline (at confidence p < 1%). target tree fragments of MBOTs, we analyzed the MBOT rules that were used when decoding the test set. We distinguish several types of rules. A rule is contiguous if it has only 1 target tree fragment. All other rules are (potentially) discontiguous. Moreover, lexical rules are rules whose leaves are exclusively lexical items. All other rules (i.e., those that contain at least one non-lexical leaf) are structural. Table 4 reports how many rules of each type are used during decoding for both our MBOT system and the minimal ℓMBOT. Below, we focus on analyzing our MBOT system. Out of the rules used for German, 27% were (potentially) discontiguous and 5% were structural. For Arabic, we observe 67% discontiguous rules and 26% structural rules. For translating into Chinese 30% discontiguous rules were used and the structural rules account to 18%. These numbers show that the usage of discontiguous rules tunes to the specific language pair. For instance, Arabic utilizes them more compared to German and Chinese. Furthermore, German uses a lot of lexical rules which is probably due to the fact that it is a morphologically rich language. On the other hand, Arabic and Chinese make good use of structural rules. In addition, Table 4 presents a finer-grained analysis based on the number of target tree fragments. Only rules with at most 8 target tree fragments were used. While German and Arabic seem to require some rules with 6 target tree fragments, Chinese probably does not. We conclude that the number of target tree fragments can be restricted to a language-pair specific number during rule extraction. 6.3 Qualitative Analysis In this section, we inspect some English-toGerman translations generated by the Moses baseline and our MBOT system in order to provide some evidence for linguistic constructions that our system handles better. We identified (a) the realization of reflexive pronouns, relative pronouns, and particle verbs, (b) the realization of verbal material, and (c) local and long distance reordering to be better throughout than in the baseline system. All examples are (parts of) translations of sentences from the test data. Ungrammatical constructions are enclosed in brackets and marked with a star. We focus on instances that seem relevant to the new ability to use non-minimal rules. We start with an example showing the realization of a reflexive pronoun. Source: Bitcoin differs from other types of virtual currency. Reference: Bitcoin unterscheidet sich von anderen Arten virtueller W¨ahrungen. Baseline: Bitcoin [unterscheidet]⋆von anderen Arten [der virtuellen W¨ahrung]⋆. 821 Target tree fragments Language pair System Type Lex Struct Total 2 3 4 5 ≥6 English-to-German our cont. 27,351 635 27,986 MBOT discont. 9,336 1,110 10,446 5,565 3,441 1,076 312 52 minimal cont. 55,910 4,492 60,402 ℓMBOT discont. 2,167 7,386 9,553 6,458 2,589 471 34 1 English-to-Arabic our cont. 1,839 651 2,490 MBOT discont. 3,670 1,324 4,994 3,008 1,269 528 153 36 minimal cont. 18,389 2,855 21,244 ℓMBOT discont. 1,138 1,920 3,058 2,525 455 67 8 3 English-to-Chinese our cont. 
17,135 1,585 18,720 MBOT discont. 4,822 3,341 8,163 6,411 1,448 247 55 2 minimal cont. 34,275 8,820 43,095 ℓMBOT discont. 516 4,292 4,808 3,816 900 82 6 4 Table 4: Number of rules per type used when decoding test (Lex = lexical rules; Struct = structural rules; [dis]cont. = [dis]contiguous). MBOT: Bitcoin unterscheidet sich von anderen Arten [der virtuellen W¨ahrung]⋆. Here the baseline drops the reflexive pronoun sich, which is correctly realized by the MBOT system. The rule used is displayed in Figure 8. differs from other →  VVFIN unterscheidet , PRF sich , APPR von , ADJA anderen  Figure 8: Rule realizing the reflexive pronoun. Next, we show a translation in which our system correctly generates a whole verbal segment. Source: It turned out that not only ... Reference: Es stellte sich heraus, dass nicht nur .. . Baseline: [Heraus,]⋆nicht nur ... MBOT: Es stellte sich heraus, dass nicht nur ... The baseline drops the verbal construction whereas the large non-minimal rule of Figure 9 allows our MBOT to avoid that drop. Again, the required reflexive pronoun sich is realized as well as the necessary comma before the conjunction dass. It turned out that →  PPER Es , VVFIN stellte , PRF sich , PTKZU heraus , $, , , KOUS dass  Figure 9: MBOT rule for the verbal segment. Another feature of MBOT is its power to perform long distance reordering with the help of several discontiguous output fragments. Source: . . . weapons factories now, which do not endure competition on the international market and . . . Reference: . . . R¨ustungsfabriken, die der internationalen Konkurrenz nicht standhalten und . . . Baseline: . . . [Waffen in den Fabriken nun]⋆, die nicht einem Wettbewerb auf dem internationalen Markt []⋆und . . . MBOT: . . . [Waffen Fabriken nun]⋆, die Konkurrenz auf dem internationalen Markt nicht ertragen und . . . Figure 10 shows the rules which enable the MBOT system to produce the correct reordering. which do not X →  PRELS die , NP NP , PTKNEG nicht , VP VP  endure X →  NP NP , VP ertragen  competition X →  NP Konkurrenz PP  on the international market →  PP auf dem internationalen Markt  Figure 10: Long distance reordering. 7 Conclusion We present an application of a string-to-tree variant of local multi bottom-up tree transducers, which are tree-to-tree models, to statistical machine translation. Originally, only minimal rules were extracted, but to overcome the typically lower translation quality of tree-to-tree systems and minimal rules, we abolish the syntactic annotation on the source side and develop a stringto-tree variant. In addition, we present a new pa822 rameterized rule extraction that can extract nonminimal rules, which are particularly helpful for translating fixed phrases. It would be interesting to know how much can be gained when using only one contribution at a time. Hence, we will explore the impact of string-to-tree and non-minimal rules in isolation. We demonstrate that our new system significantly outperforms the standard Moses string-totree system on three different large-scale translation tasks (English-to-German, English-to-Arabic, and English-to-Chinese) with a gain between 0.53 and 0.87 BLEU points. An analysis of the rules used to decode the test sets suggests that the usage of discontiguous rules is tuned to each language pair. Furthermore, it shows that only discontiguous rules with at most 8 target tree fragments are used. Thus, further research could investigate a hard limit on the number of target tree fragments during rule extraction. 
We also perform a manual inspection of the obtained translations and confirm that our string-to-tree MBOT rules can adequately handle discontiguous phrases, which occur frequently in German, Arabic, and Chinese. Other languages that exhibit such phenomena include Czech, Dutch, Russian, and Polish. Thus, we hope that our approach can also be applied successfully to other language pairs. To support further experimentation by the community, we publicly release our developed software and complete tool-chain (http://www.ims.uni-stuttgart.de/ forschung/ressourcen/werkzeuge/ mbotmoses.html). Acknowledgement The authors would like to express their gratitude to the reviewers for their helpful comments and Robin Kurtz for preparing the Arabic corpus. All authors were financially supported by the German Research Foundation (DFG) grant MA 4959 / 1-1. References Ondˇrej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proc. 8th WMT, pages 1–44. Association for Computational Linguistics. Fabienne Braune, Nina Seemann, Daniel Quernheim, and Andreas Maletti. 2013. Shallow local multi bottom-up tree transducers in statistical machine translation. In Proc. 51st ACL, pages 811–821. Association for Computational Linguistics. Timothy Buckwalter. 2002. Arabic transliteration. http://www.qamus.org/ transliteration.htm. Pi-Chuan Chang, Michel Galley, and Christopher D. Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In Proc. 3rd WMT, pages 224–232. Association for Computational Linguistics. David Chiang. 2006. An introduction to synchronous grammars. In Proc. 44th ACL. Association for Computational Linguistics. Part of a tutorial given with Kevin Knight. David Chiang. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201–228. David Chiang. 2010. Learning to translate with source and target syntax. In Proc. 48th ACL, pages 1443– 1452. Association for Computational Linguistics. Steve DeNeefe, Kevin Knight, Wei Wang, and Daniel Marcu. 2007. What can syntax-based MT learn from phrase-based MT? In Proc. 2007 EMNLP, pages 755–763. Association for Computational Linguistics. Andreas Eisele and Yu Chen. 2010. MultiUN: A multilingual corpus from United Nation documents. In Proc. 7th LREC, pages 2868–2872. European Language Resources Association. Jason Eisner. 2003. Learning non-isomorphic tree mappings for machine translation. In Proc. 41st ACL, pages 205–208. Association for Computational Linguistics. Michel Galley, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What’s in a translation rule? In Proc. 2004 NAACL, pages 273–280. Association for Computational Linguistics. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proc. 44th ACL, pages 961–968. Association for Computational Linguistics. Kevin Gimpel. 2011. Code for statistical significance testing for MT evaluation metrics. http://www. ark.cs.cmu.edu/MT/. Irving J. Good. 1953. The population frequencies of species and the estimation of population parameters. Biometrika, 40(3–4):237–264. 823 Nizar Habash, Owen Rambow, and Ryan Roth. 2009. MADA+TOKAN: A toolkit for Arabic tokenization, diacritization, morphological disambiguation, POS tagging, stemming and lemmatization. In Proc. 
2nd MEDAR, pages 102–109. Association for Computational Linguistics. Hieu Hoang, Philipp Koehn, and Adam Lopez. 2009. A unified framework for phrase-based, hierarchical, and syntax-based statistical machine translation. In Proc. 6th IWSLT, pages 152–159. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proc. 2003 NAACL, pages 48–54. Association for Computational Linguistics. Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh system description for the 2005 IWSLT Speech Translation Evaluation. In Proc. 2nd IWSLT, pages 68–75. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proc. 45th ACL, pages 177–180. Association for Computational Linguistics. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Proc. 10th MT Summit, pages 79–86. Association for Machine Translation in the Americas. Philipp Koehn. 2009. Statistical Machine Translation. Cambridge University Press. Alon Lavie, Alok Parlikar, and Vamshi Ambati. 2008. Syntax-driven learning of sub-sentential translation equivalents and translation rules from parsed parallel corpora. In Proc. 2nd SSST, pages 87–95. Association for Computational Linguistics. Yang Liu, Yajuan L¨u, and Qun Liu. 2009. Improving tree-to-tree translation with packed forests. In Proc. 47th ACL, pages 558–566. Association for Computational Linguistics. Andreas Maletti. 2011. How to train your multi bottom-up tree transducer. In Proc. 49th ACL, pages 825–834. Association for Computational Linguistics. NIST. 2010. NIST 2002 [2003, 2005, 2008] open machine translation evaluation. Linguistic Data Consortium. LDC2010T10 [T11, T14, T21]. Franz J. Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1):19–51. Franz J. Och and Hermann Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417–449. Franz J. Och. 2003. Minimum error rate training in statistical machine translation. In Proc. 41st ACL, pages 160–167. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and Wei jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proc. 40th ACL, pages 311–318. Association for Computational Linguistics. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proc. 44th ACL, pages 433–440. Association for Computational Linguistics. Helmut Schmid. 2004. Efficient parsing of highly ambiguous context-free grammars with bit vectors. In Proc. 20th COLING, pages 162–168. Association for Computational Linguistics. Andreas Stolcke. 2002. SRILM — an extensible language modeling toolkit. In Proc. 7th INTERSPEECH, pages 257–286. Jun Sun, Min Zhang, and Chew Lim Tan. 2009. A noncontiguous tree sequence alignment-based model for statistical machine translation. In Proc. 47th ACL, pages 914–922. Association for Computational Linguistics. Benjamin Wellington, Sonjia Waxmonsky, and I. Dan Melamed. 2006. Empirical lower bounds on the complexity of translational equivalence. In Proc. 44th ACL, pages 977–984. 
Association for Computational Linguistics.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 74–83, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Weakly Supervised Models of Aspect-Sentiment for Online Course Discussion Forums Arti Ramesh,1 Shachi H. Kumar,2 James Foulds,2 Lise Getoor2 1University of Maryland, College Park 2University of California, Santa Cruz [email protected], {shulluma, jfoulds, getoor}@ucsc.edu Abstract Massive open online courses (MOOCs) are redefining the education system and transcending boundaries posed by traditional courses. With the increase in popularity of online courses, there is a corresponding increase in the need to understand and interpret the communications of the course participants. Identifying topics or aspects of conversation and inferring sentiment in online course forum posts can enable instructor interventions to meet the needs of the students, rapidly address course-related issues, and increase student retention. Labeled aspect-sentiment data for MOOCs are expensive to obtain and may not be transferable between courses, suggesting the need for approaches that do not require labeled data. We develop a weakly supervised joint model for aspectsentiment in online courses, modeling the dependencies between various aspects and sentiment using a recently developed scalable class of statistical relational models called hinge-loss Markov random fields. We validate our models on posts sampled from twelve online courses, each containing an average of 10,000 posts, and demonstrate that jointly modeling aspect with sentiment improves the prediction accuracy for both aspect and sentiment. 1 Introduction Massive Open Online Courses (MOOCs) have emerged as a powerful medium for imparting education to a wide geographical population. Discussion forums are the primary means of communication between MOOC participants (students, TAs, and instructors). Due to the open nature of these courses, they attract people from all over the world leading to large numbers of participants and hence, large numbers of posts in the discussion forums. In the courses we worked with, we found that over the course of the class there were typically over 10,000 posts. Within this slew of posts, there are valuable problem-reporting posts that identify issues such as broken links, audio-visual glitches, and inaccuracies in the course materials. Automatically identifying these reported problems is important for several reasons: i) it is time-consuming for instructors to manually screen through all of the posts due to the highly skewed instructor-tostudent ratio in MOOCs, ii) promptly addressing issues could help improve student retention, and iii) future iterations of the course could benefit from identifying technical and logistical issues currently faced by students. In this paper, we investigate the problem of determining the fine-grained topics of posts (which we refer to as “MOOC aspects”) and the sentiment toward them, which can potentially be used to improve the course. While aspect-sentiment has been widely studied, the MOOC discussion forum scenario presents a unique set of challenges. Labeled data are expensive to obtain, and posts containing finegrained aspects occur infrequently in courses and differ across courses, thereby making it expensive to get sufficient coverage of all labels. 
Few distinct aspects occur per course, and only 5-10% of posts in a course are relevant. Hence, getting labels for fine-grained labels involves mining and annotating posts from a large number of courses. Further, creating and sharing labeled data is difficult as data from online courses is governed by IRB regula74 tions. Privacy restrictions are another reason why unsupervised/weakly-supervised methods can be helpful. Lastly, to design a system capable of identifying all possible MOOC aspects across courses, we need to develop a system that is not fine-tuned to any particular course, but can adapt seamlessly across courses. To this end, we develop a weakly supervised system for detecting aspect and sentiment in MOOC forum posts and validate its effectiveness on posts sampled from twelve MOOC courses. Our system can be applied to any MOOC discussion forum with no or minimal modifications. Our contributions in this paper are as follows: • We show how to encode weak supervision in the form of seed words to extract extract course-specific features in MOOCs using SeededLDA, a seeded variation of topic modeling (Jagarlamudi et al., 2012). • Building upon our SeededLDA approach, we develop a joint model for aspects and sentiment using the hinge-loss Markov random field (HL-MRF) probabilistic modeling framework. This framework is especially well-suited for this problem because of its ability to combine information from multiple features and jointly reason about aspect and sentiment. • To validate the effectiveness of our system, we construct a labeled evaluation dataset by sampling posts from twelve MOOC courses, and annotating these posts with fine-grained MOOC aspects and sentiment via crowdsourcing. The annotation captures finegrained aspects of the course such as content, grading, deadlines, audio and video of lectures and sentiment (i.e., positive, negative, and neutral) toward the aspect in the post. • We demonstrate that the proposed HL-MRF model can predict fine-grained aspects and sentiment and outperforms the model based only on SeededLDA. 2 Related Work To the best of our knowledge, the problem of predicting aspect and sentiment in MOOC forums has not yet been addressed in the literature. We review prior work in related areas here. Aspect-Sentiment in Online Reviews It is valuable to identify the sentiment of online reviews towards aspects such as hotel cleanliness and cellphone screen brightness, and sentiment analysis at the aspect-level has been studied extensively in this context (Liu and Zhang, 2012). Several of these methods use latent Dirichlet allocation topic models (Blei et al., 2003) and variants of it for detecting aspect and sentiment (Lu et al., 2011; Lin and He, 2009). Liu and Zhang (2012) provide a comprehensive survey of techniques for aspect and sentiment analysis. Here, we discuss works that are closely related to ours. Titov and McDonald (2008) emphasize the importance of an unsupervised approach for aspect detection. However, the authors also indicate that standard LDA (Blei et al., 2003) methods capture global topics and not necessarily pertinent aspects — a challenge that we address in this work. Brody and Elhadad (2010), Titov and McDonald (2008), and Jo and Oh (2011) apply variations of LDA at the sentence level for online reviews. We find that around 90% of MOOC posts have only one aspect, which makes sentence-level aspect modeling inappropriate for our domain. 
Most previous approaches for sentiment rely on manually constructed lexicons of strongly positive and negative words (Fahrni and Klenner, 2008; Brody and Elhadad, 2010). These methods are effective in an online review context, however sentiment in MOOC forum posts is often implicit, and not necessarily indicated by standard lexicons. For example, the post “Where is my certificate? Waiting over a month for it.” expresses negative sentiment toward the certificate aspect, but does not include any typical negative sentiment words. In our work, we use a data-driven model-based approach to discover domain-specific lexicon information guided by small sets of seed words. There has also been substantial work on joint models for aspect and sentiment (Kim et al., 2013; Diao et al., 2014; Zhao et al., 2010; Lin et al., 2012), and we adopt such an approach in this paper. Kim et al. (2013) use a hierarchical aspectsentiment model and evaluate it for online reviews. Mukherjee and Liu (2012) use seed words for discovering aspect-based sentiment topics. Drawing on the ideas of Mukherjee and Liu (2012) and Kim et al. (2013), we propose a statistical relational learning approach that combines the advantages of seed words, aspect hierarchy, and flat 75 Post 1: I have not received the midterm. Post 2: No lecture subtitles week, will they be uploaded? Post 3: I am ... and I am looking forward to learn more ... Table 1: Example posts from MOOC forums. Aspect words are highlighted in bold. aspect-sentiment relationships. It is important to note that a broad majority of the previous work on aspect sentiment focuses on the specific challenges of online review data. As discussed in detail above, MOOC forum data have substantially different properties, and our approach is the first to be designed particularly for this domain. Learning Analytics In another line of research, there is a growing body of work on the analysis of online courses. Regarding MOOC forum data, Stump et al. (2013) propose a framework for taxonomically categorizing forum posts, leveraging manual annotations. We differ from their approach in that we develop an automatic system to predict MOOC forum categories without using labeled training data. Ramesh et al. (2014b) categorize forum posts into three broad categories in order to predict student engagement. Unlike this method, our system is capable of fine-grained categorization and of identifying aspects in MOOCS. Chaturvedi et al. (2014) focus on predicting instructor intervention using lexicon features and thread features. In contrast, our system is capable of predicting fine MOOC aspects and sentiment of discussion forum posts and thus provides a more informed analysis of MOOC posts. 3 Problem Setting and Data MOOC participants primarily communicate through discussion forums, consisting of posts, which are short pieces of text. Table 1 provides examples of posts in MOOC forums. Posts 1 and 2 report issues and feedback for the course, while post 3 is a social interaction message. Our goal is to distinguish problem-reporting posts such as 1 and 2 from social posts such as 3, and to identify the issues that are being discussed. We formalize this task as an aspect-sentiment prediction problem (Liu and Zhang, 2012). The issues reported in MOOC forums can be related to the different elements of the course such as lectures and quizzes, which are referred to as aspects. The aspects are selected based on MOOC domain expertise and inspiration from Stump et al. 
(2013), aiming to cover common concerns that could benefit from intervention. The task is to predict these COARSE-ASPECT FINE-ASPECT Description # of posts LECTURE LECTURE-CONTENT Content of lectures. 559 LECTURE-VIDEO Video of lectures. 215 LECTURE-SUBTITLES Subtitles of lecture. 149 LECTURE-AUDIO Audio of lecture. 136 LECTURE-LECTURER Delivery of instructor. 69 QUIZ QUIZ-CONTENT Content in quizzes. 439 QUIZ-GRADING Grading of quizzes. 360 QUIZ-SUBMISSION Quiz submission. 329 QUIZ-DEADLINE Deadline of quizzes. 142 CERTIFICATE Course certificates. 194 SOCIAL Social interaction posts. 1187 Table 2: Descriptions of coarse and fine aspects. aspects for each post, along with the sentiment polarity toward the aspect, which we code as positive, negative, or neutral. The negative-sentiment posts, along with their aspects, allow us to identify potentially correctable issues in the course. As labels are expensive in this scenario, we formulate the task as a weakly supervised prediction problem. In our work, we assume that a post has at most one fine-grained aspect, as we found that this was true for 90% of the posts in our data. This property is due in part to the brevity of forum posts, which are much shorter documents than those considered in other aspect-sentiment scenarios such as product reviews. 3.1 Aspect Hierarchy While we do not require labeled data, our approaches allow the analyst to instead relatively easily encode a small amount of domain knowledge by seeding the models with a few words relating to each aspect of interest. Hence, we refer to our approach as weakly supervised. Our models can further make use of hierarchical structure between the aspects. The proposed approach is flexible, allowing the aspect seeds and hierarchy to be selected for a given MOOC domain. For the purposes of this study, we represent the MOOC aspects with a two-level hierarchy. We identify a list of nine fine-grained aspects, which are grouped into four coarse topics. The coarse aspects consist of LECTURE, QUIZ, CERTIFICATE, and SOCIAL topics. Table 2 provides a description of each of the aspects and also gives the number of posts in each aspect category after annotation. As both LECTURE and QUIZ are key coarselevel aspects in online courses, and more nuanced aspect information for these is important to facilitate instructor interventions, we identify fine-grained aspects for these coarse aspects. 76 For LECTURE we identify LECTURE-CONTENT, LECTURE-VIDEO, LECTURE-AUDIO, LECTURESUBTITLES, and LECTURE-LECTURER as fine aspects. For QUIZ, we identify the fine aspects QUIZ-CONTENT, QUIZ-GRADING, QUIZDEADLINES, and QUIZ-SUBMISSION. We use the label SOCIAL to refer to social interaction posts that do not mention a problem-related aspect. 3.2 Dataset We construct a dataset by sampling posts from MOOC courses to capture the variety of aspects discussed in online courses. We include courses from different disciplines (business, technology, history, and the sciences) to ensure broad coverage of aspects. Although we adopt an approach that does not require labeled data for training, which is important for most practical MOOC scenarios, in order to validate our methods we obtain labels for the sampled posts using Crowdflower,1 an online crowd-sourcing annotation platform. Each post was annotated by at least 3 annotators. Crowdflower calculates confidence in labels by computing trust scores for annotators using test questions. Kolhatkar et al. 
(2013) provide a detailed analysis of Crowdflower trust calculations and the relationship to inter-annotator agreement. We follow their recommendations and retain only labels with confidence > 0.5. 4 Aspect-Sentiment Prediction Models In this section, we develop models and featureextraction techniques to address the challenges of aspect-sentiment prediction for MOOC forums. We present two weakly-supervised methods— first, using a seeded topic modeling approach (Jagarlamudi et al., 2012) to identify aspects and sentiment. Second, building upon this method, we then introduce a more powerful statistical relational model which reasons over the seeded LDA predictions as well as sentiment side-information to encode hierarchy information and correlations between sentiment and aspect. 4.1 Seeded LDA Model Topic models (Blei et al., 2003), which identify latent semantic themes from text corpora, have previously been successfully used to discover aspects for sentiment analysis (Diao et al., 2014). By equating the topics, i.e. discrete distributions over 1http://www.crowdflower.com/ words, with aspects and/or sentiment polarities, topic models can recover aspect-sentiment predictions. In the MOOC context we are specifically interested in problems with the courses, rather than general topics which may be identified by a topic model, such as the topics of the course material. To guide the topic model to identify aspects of interest, we use SeededLDA (Jagarlamudi et al., 2012), a variant of LDA which allows an analyst to “seed” topics by providing key words that should belong to the topics. We construct SeededLDA models by providing a set of seed words for each of the coarse and fine aspects in the aspect hierarchy of Table 2. We also seed topics for positive, negative and neutral sentiment polarities. The seed words for coarse topics are provided in Table 3, and fine aspects in Table 4. For the sentiment topics (Table 5), the seed words for the topic positive are positive words often found in online courses such as thank, congratulations, learn, and interest. Similarly, the seed words for the negative topic are negative in the context of online courses, such as difficult, error, issue, problem, and misunderstand. Additionally, we also use SeededLDA for isolating some common problems in online courses that are associated with sentiment, such as difficulty, availability, correctness, and coursespecific seed words from the syllabus as described in Table 6. Finally, having inferred the SeededLDA model from the data set, for each post p we predict the most likely aspect and the most likely sentiment polarity according to the post’s inferred distribution over topics θ(p). In our experiments, we tokenize and stem the posts using NLTK toolkit (Loper and Bird, 2002), and use a stop word list tuned to online course discussion forums. The topic model Dirichlet hyperparameters are set to α = 0.01, β = 0.01 in our experiments. For SeededLDA models corresponding to the seed sets in Tables 3, 4, and 5, the number of topics is equal to the number of seeded topics. For SeededLDA models corresponding to the seed words in Tables 6 and 3, we use 10 topics, allowing for some unseeded topics that are not captured by the seed words. 
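To make the prediction step concrete, the following is a minimal Python sketch of how the most likely aspect and sentiment polarity can be read off a post's inferred topic distribution θ(p); representing θ(p) as a dictionary from seeded topic names to probabilities, and the helper names used here, are illustrative assumptions rather than part of the SeededLDA implementation itself.

# theta is the inferred distribution over seeded topics for one post, e.g.
# {"QUIZ-GRADING": 0.46, "QUIZ-CONTENT": 0.12, ..., "NEGATIVE": 0.51, ...}

FINE_ASPECTS = [
    "LECTURE-CONTENT", "LECTURE-VIDEO", "LECTURE-SUBTITLES", "LECTURE-AUDIO",
    "LECTURE-LECTURER", "QUIZ-CONTENT", "QUIZ-GRADING", "QUIZ-SUBMISSION",
    "QUIZ-DEADLINE",
]
SENTIMENTS = ["POSITIVE", "NEGATIVE", "NEUTRAL"]

def predict_most_likely(theta, topics):
    # pick the seeded topic with the largest posterior mass under theta(p)
    return max(topics, key=lambda t: theta.get(t, 0.0))

def predict_post(theta):
    # one aspect prediction and one sentiment prediction per post
    return (predict_most_likely(theta, FINE_ASPECTS),
            predict_most_likely(theta, SENTIMENTS))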
4.2 Hinge-loss Markov Random Fields The approach described in the previous section automatically identifies user-seeded aspects and sentiment, but it does not make further use of struc77 LECTURE: lectur, video, download, volum, low, headphon, sound, audio, transcript, subtitl, slide, note QUIZ: quiz, assignment, question, midterm,exam, submiss, answer, grade, score, grad, midterm, due, deadlin CERTIFICATE: certif, score, signatur, statement, final, course, pass, receiv, coursera, accomplish, fail SOCIAL: name, course, introduction, stud, group, everyon, student Table 3: Seed words for coarse aspects LECTURE-VIDEO: video, problem, download, play, player, watch, speed, length, long, fast, slow, render, qualiti LECTURE-AUDIO: volum, low, headphon, sound, audio, hear, maximum, troubl, qualiti, high, loud, heard LECTURE-LECTURER: professor, fast, speak, pace, follow, speed, slow, accent, absorb, quick, slowli LECTURE-SUBTITLES: transcript, subtitl, slide, note, lectur, difficult, pdf LECTURE-CONTENT: typo, error, mistak, wrong, right, incorrect, mistaken QUIZ-CONTENT: question, challeng, difficult, understand, typo, error, mistak, quiz, assignment QUIZ-SUBMISSION: submiss, submit, quiz, error, unabl, resubmit QUIZ-GRADING: answer, question, answer, grade, assignment, quiz, respons ,mark, wrong, score QUIZ-DEADLINE: due, deadlin, miss, extend, late Table 4: Seed words for fine aspects POSITIVE: interest, excit, thank, great, happi, glad, enjoy, forward, insight, opportun, clear, fantast, fascin, learn, hope, congratul NEGATIVE: problem, difficult, error, issu, unabl, misunderstand, terribl, bother, hate, bad, wrong, mistak, fear, troubl NEUTRAL: coursera, class, hello, everyon, greet, nam, meet, group, studi, request, join, introduct, question, thank Table 5: Seed words for sentiment DIFFICULTY: difficult, understand, ambigu, disappoint, hard, follow, mislead, difficulti, challeng, clear CONTENT: typo, error, mistak, wrong, right, incorrect, mistaken, score AVAILABILITY: avail, nowher, find, access, miss, view, download, broken, link, bad, access, deni, miss, permiss COURSE-1: develop, eclips, sdk, softwar, hardware, accuser, html, platform, environ, lab, ide, java, COURSE-2: protein, food, gene, vitamin, evolut, sequenc, chromosom, genet, speci, peopl, popul, evolv, mutat, ancestri COURSE-3: compani, product, industri, strategi, decision, disrupt, technolog, market Table 6: Seed words for sentiment specific to online courses ture or dependencies between these values, or any additional side-information. To address this, we propose a more powerful approach using hingeloss Markov random fields (HL-MRFs), a scalable class of continuous, conditional graphical models (Bach et al., 2013). HL-MRFs have achieved state-of-the-art performance in many domains including knowledge graph identification (Pujara et al., 2013), understanding engagements in MOOCs (Ramesh et al., 2014a), biomedicine and multirelational link prediction (Fakhraei et al., 2014), and modelling social trust (Huang et al., 2013). These models can be specified using Probabilistic Soft Logic (PSL) (Bach et al., 2015), a weighted first order logical templating language. An example of a PSL rule is λ : P(a) ∧Q(a, b) →R(b), where P, Q, and R are predicates, a and b are variables, and λ is the weight associated with the rule. 
The weight of the rule indicates its importance in the HL-MRF probabilistic model, which defines a probability density function of the form

P(Y | X) ∝ exp( − Σ_{r=1}^{M} λ_r φ_r(Y, X) ),    φ_r(Y, X) = ( max{ l_r(Y, X), 0 } )^{ρ_r},    (1)

where φ_r(Y, X) is a hinge-loss potential corresponding to an instantiation of a rule, and is specified by a linear function l_r and optional exponent ρ_r ∈ {1, 2}. For example, in our MOOC aspect-sentiment model, if P and F denote post P and fine aspect F, then we have predicates SEEDLDA-FINE(P, F) to denote the value corresponding to topic F in SeededLDA, and FINE-ASPECT(P, F) is the target variable denoting the fine aspect of the post P. A PSL rule to encode that the SeededLDA topic F suggests that aspect F is present is λ : SEEDLDA-FINE(P, F) → FINE-ASPECT(P, F). We can generate more complex rules connecting the different features and target variables, e.g. λ : SEEDLDA-FINE(P, F) ∧ SENTIMENT(P, S) → FINE-ASPECT(P, F). This rule encodes a dependency between SENTIMENT and FINE-ASPECT, namely that the SeededLDA topic and a strong sentiment score increase the probability of the fine aspect. The HL-MRF model uses these rules to encode domain knowledge about dependencies among the predicates. The continuous value representation further helps in understanding the confidence of predictions.

4.3 Joint Aspect-Sentiment Prediction using Probabilistic Soft Logic (PSL-Joint)

In this section, we describe our joint approach to predicting aspect and sentiment in online discussion forums, leveraging the strong dependence between aspect and sentiment. We present a system designed using HL-MRFs which combines different features, accounting for their respective uncertainty, and encodes the dependencies between aspect and sentiment in the MOOC context. Table 7 provides some representative rules from our model.2 The rules can be classified into two broad categories: 1) rules that combine multiple features, and 2) rules that encode the dependencies between aspect and sentiment.

4.3.1 Combining Features

The first set of rules in Table 7 combines different features extracted from the post. The SEEDLDA-FINE, SEEDLDA-COARSE and SEEDLDA-SENTIMENT-COURSE predicates in rules refer to SeededLDA posterior distributions using fine, coarse, and course-specific sentiment seed words respectively. The strength of our model comes from its ability to encode different combinations of features and weight them according to their importance. The first rule in Table 7 combines the SeededLDA features from both SEEDLDA-FINE and SEEDLDA-COARSE to predict the fine aspect. Interpreting the rule, the fine aspect of the post is more likely to be LECTURE-LECTURER if the coarse SeededLDA score for the post is LECTURE, and the fine SeededLDA score for the post is LECTURE-LECTURER. Similarly, the second rule provides combinations of some of the other features used by the model: two different SeededLDA scores for sentiment, as indicated by seed words in Tables 5 and 6. The third rule states that certain fine aspects occur together with certain values of sentiment more than others. In online courses, posts that discuss grading usually talk about grievances and issues. The rule captures that QUIZ-GRADING occurs with negative sentiment in most cases.
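As a concrete illustration of how one instantiated rule contributes a hinge-loss potential of the form in Equation 1, the sketch below applies the standard Łukasiewicz relaxation used in PSL to a rule with a conjunctive body; the function is a simplified stand-in for what the HL-MRF inference engine computes internally, not the actual PSL implementation of Bach et al. (2015).

def weighted_rule_potential(body_values, head_value, weight=1.0, rho=1):
    # Rule B1 ∧ ... ∧ Bn -> H over [0,1]-valued atoms.
    # Lukasiewicz relaxation: body truth = max(0, sum(B) - (n - 1)),
    # distance to satisfaction l = body - head, and phi = (max{l, 0})^rho
    # as in Equation 1; the return value is the weighted term lambda_r * phi_r.
    n = len(body_values)
    body = max(0.0, sum(body_values) - (n - 1))
    phi = max(0.0, body - head_value) ** rho
    return weight * phi

# e.g. SEEDLDA-FINE(P, QUIZ-GRADING)=0.8 and SENTIMENT(P, NEGATIVE)=0.7 push
# FINE-ASPECT(P, QUIZ-GRADING) upward; a candidate value of 0.3 is penalised:
print(weighted_rule_potential([0.8, 0.7], 0.3, weight=2.0))  # 0.4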
2Full model available at https://github.com/artir/ramesh-acl15 4.3.2 Encoding Dependencies Between Aspect and Sentiment In addition to combining features, we also encode rules to capture the taxonomic dependence between coarse and fine aspects, and the dependence between aspect and sentiment (Table 7, bottom). Rules 4 and 5 encode pair-wise dependency between FINE-ASPECT and SENTIMENT, and COARSE-ASPECT and FINE-ASPECT respectively. Rule 4 uses the SeededLDA value for QUIZ-DEADLINES to predict both SENTIMENT, and FINE-ASPECT jointly. This together with other rules for predicting SENTIMENT and FINEASPECT individually creates a constrained satisfaction problem, forcing aspect and sentiment to agree with each other. Rule 5 is similar to rule 4, capturing the taxonomic relationship between target variables COARSE-ASPECT and FINE-ASPECT. Thus, by using conjunctions to combine features and appropriately weighting these rules, we account for the uncertainties in the underlying features and make them more robust. The combination of these two different types of weighted rules, referred to below as PSL-Joint, is able to reason collectively about aspect and sentiment. 5 Empirical Evaluation In this section, we present the quantitative and qualitative results of our models on the annotated MOOC dataset. Our models do not require labeled data for training; we use the label annotations only for evaluation. Tables 8 – 11 show the results for the SeededLDA and PSL-Joint models. Statistically significant differences, evaluated using a paired t-test with a rejection threshold of 0.01, are typed in bold. 5.1 SeededLDA for Aspect-Sentiment For SeededLDA, we use the seed words for coarse, fine, and sentiment given in Tables 3 – 5. After training the model, we use the SeededLDA multinomial posterior distribution to predict the target variables. We use the maximum value in the posterior for the distribution over topics for each post to obtain predictions for coarse aspect, fine aspect, and sentiment. We then calculate precision, recall and F1 values comparing with our ground truth labels. 79 PSL-JOINT RULES Rules combining features SEEDLDA-FINE(POST, LECTURE-LECTURER) ∧SEEDLDA-COARSE(POST, LECTURE) →FINE-ASPECT(POST, LECTURE-LECTURER) SEEDLDA-SENTIMENT-COURSE(POST, NEGATIVE) ∧SEEDLDA-SENTIMENT(POST, NEGATIVE) →SENTIMENT(POST, NEGATIVE) SEEDLDA-SENTIMENT-COURSE(POST, NEGATIVE) ∧SEEDLDA-FINE(POST, QUIZ-GRADING) →FINE-ASPECT(POST, QUIZ-GRADING) Encoding dependencies between aspect and sentiment SEEDLDA-FINE(POST, QUIZ-DEADLINES) ∧SENTIMENT(POST, NEGATIVE) →FINE-ASPECT(POST, QUIZ-DEADLINES) SEEDLDA-FINE(POST, QUIZ-SUBMISSION) ∧FINE-ASPECT(POST, QUIZ-SUBMISSION) →COARSE-ASPECT(POST, QUIZ) Table 7: Representative rules from PSL-Joint model Model LECTURE-CONTENT LECTURE-VIDEO LECTURE-AUDIO LECTURE-LECTURER LECTURE-SUBTITLES Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 SEEDEDLDA 0.137 0.057 0.08 0.156 0.256 0.240 0.684 0.684 0.684 0.037 0.159 0.06 0.289 0.631 0.397 PSL-JOINT 0.407 0.413 0.410 0.411 0.591 0.485 0.635 0.537 0.582 0.218 0.623 0.323 0.407 0.53 0.461 Table 8: Precision, recall and F1 scores for LECTURE fine aspects Model QUIZ-CONTENT QUIZ-SUBMISSION QUIZ-DEADLINES QUIZ-GRADING Prec Rec. F1 Prec Rec. F1 Prec. Rec. F1 Prec. Rec. 
F1 SEEDEDLDA 0.042 0.006 0.011 0.485 0.398 0.437 0.444 0.141 0.214 0.524 0.508 0.514 PSL-JOINT 0.324 0.405 0.36 0.521 0.347 0.416 0.667 0.563 0.611 0.572 0.531 0.550 Table 9: Precision, recall and F1 scores for QUIZ fine aspects Model LECTURE QUIZ CERTIFICATE SOCIAL Prec Rec. F1 Prec. Rec. F1 Prec. Rec. F1 Prec. Rec. F1 SEEDEDLDA 0.597 0.673 0.632 0.752 0.583 0.657 0.315 0.845 0.459 0.902 0.513 0.654 PSL-JOINT 0.563 0.715 0.630 0.724 0.688 0.706 0.552 0.711 0.621 0.871 0.530 0.659 Table 10: Precision, recall and F1 scores for coarse aspects Model POSITIVE NEGATIVE NEUTRAL Prec Rec. F1 Prec. Rec. F1 Prec. Rec. F1 SEEDEDLDA 0.104 0.721 0.182 0.650 0.429 0.517 0.483 0.282 0.356 PSL-JOINT 0.114 0.544 0.189 0.571 0.666 0.615 0.664 0.322 0.434 Table 11: Precision, recall and F1 scores for sentiment 5.2 PSL for Joint Aspect-Sentiment (PSL-Joint) Tables 8 and 9 give the results for the fine aspects under LECTURE and QUIZ. PSL-JOINT performs better than SEEDEDLDA in most cases, without suffering any statistically significant losses. Notable cases include the increase in scores for LECTURE-LECTURER, LECTURE-SUBTITLES, LECTURE-CONTENT, QUIZ-CONTENT, QUIZGRADING, and QUIZ-DEADLINES, for which the scores increase by a large margin over SeededLDA. We observe that for LECTURE-CONTENT and QUIZ-CONTENT, the increase in scores is more significant than others with SeededLDA performing very poorly. Since both lecture and quiz content have the same kind of words related to the course material, SeededLDA is not able to distinguish between these two aspects. We found that in 63% of these missed predictions, SeededLDA predicts LECTURE-CONTENT, instead of QUIZ-CONTENT, and vice versa. In contrast, PSLJoint uses both coarse and fine SeededLDA scores and captures the dependency between a coarse aspect and its corresponding fine aspect. Therefore, PSL-Joint is able to distinguish between LECTURE-CONTENT and QUIZ-CONTENT. In the next section, we present some examples of posts that SEEDEDLDA misclassified but were predicted correctly by PSL-Joint. Table 10 presents results for the coarse-aspects. We observe that PSL-Joint performs better than SeededLDA for all classes. In particular for CERTIFICATE and QUIZ, PSL-Joint exhibits a marked increase in scores when compared to SeededLDA. This is also true for sentiment, for which the scores for NEUTRAL and NEGATIVE sentiment show significant improvement (Table 11). 80 Correct Label PSL SeededLDA Post QUIZ-CONTENT QUIZ-CONTENT LECTURE-CONTENT There is a typo or other mistake in the assignment instructions (e.g. essential information omitted) Type ID: programming-content Problem ID: programming-mistake Browser: Chrome 32 OS: Windows 7 QUIZ-CONTENT QUIZ-CONTENT LECTURE-CONTENT There is a typo or other mistake on the page (e.g. factual error information omitted) Week 4 Quiz Question 6: Question 6 When a user clicks on a View that has registered to show a Context Menu which one of the following methods will be called? LECTURE-AUDIO LECTURE-AUDIO LECTURE-SUBTITLES Thanks for the suggestion about downloading the video and referring to the subtitles. I will give that a try but I would also like to point out that what the others are saying is true for me too: The audio is just barely audible even when the volume on my computer is set to 100%. SOCIAL SOCIAL LECTURE-VIDEO Let’s start a group for discussing the lecture videos. 
Table 12: Example posts that PSL-Joint predicted correctly, but were misclassified by SeededLDA Correct Label Predicted Label Second Post Prediction LECTURE-CONTENT QUIZ-CONTENT LECTURE-CONTENT I have a difference of opinion to the answer for Question 6 too. It differs from what is presented in lecture 1. SOCIAL LECTURE-SUBTITLES SOCIAL Hello guys!!! I am ... The course materials are extraordinary. The subtitles are really helpful! Thanks to instructors for giving us all a wonderful opportunity. LECTURE-CONTENT QUIZ-CONTENT LECTURE-CONTENT As the second lecture video told me I started windows telnet and connected to the virtual device. Then I typed the same command for sending an sms that the lecture video told me to. The phone received a message all right and I was able to open it but the message itself seems to be written with some strange characters. Table 13: Example posts whose second-best prediction is correct 5.3 Interpreting PSL-Joint Predictions Table 12 presents some examples of posts that PSL-Joint predicted correctly, and which SeededLDA misclassified. The first two examples illustrate that PSL can predict the subtle difference between LECTURE-CONTENT and QUIZCONTENT. Particularly notable is the third example, which contains mention of both subtitles and audio, but the negative sentiment is associated with audio rather than subtitles. PSL-Joint predicts the fine aspect as LECTURE-AUDIO, even though the underlying SeededLDA feature has a high score for LECTURE-SUBTITLES. This example illustrates the strength of the joint reasoning approach in PSL-Joint. Finally, in the last example, the post mentions starting a group to discuss videos. This is an ambiguous post containing the keyword video, while it is in reality a social post about starting a group. PSL-Joint is able to predict this because it uses both the sentiment scores associated with the post and the SeededLDA scores for fine aspect, and infers that social posts are generally positive. So, combining the feature values for social aspect and positive sentiment, it is able to predict the fine aspect as SOCIAL correctly. The continuous valued output predictions produced by PSL-Joint allow us to rank the predicted variables by output prediction value. Analyzing the predictions for posts that PSL-Joint misclassified, we observe that for four out of nine fine aspects, more than 70% of the time the correct label is in the top three predictions. And, for all fine aspects, the correct label is found in the top 3 predictions around 40% of the time. Thus, using the top three predictions made by PSL-Joint, we can understand the fine aspect of the post to a great extent. Table 13 gives some examples of posts for which the second best prediction by PSL-Joint is the correct label. For these examples, we found that PSL-Joint misses the correct prediction by a small margin(< 0.2). Since our evaluation scheme only considers the maximum value to determine the scores, these examples were treated as misclassified. 5.4 Understanding Instructor Intervention using PSL-Joint Predictions In our 3275 annotated posts, the instructor replied to 787 posts. Of these, 699 posts contain a mention of some MOOC aspect. PSL-Joint predicts 97.8% from those as having an aspect and 46.9% as the correct aspect. This indicates that PSL-Joint is capable of identifying the most important posts, i.e. those that the instructor replied to, with high accuracy. 
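The ranking analysis in the previous subsection comes down to sorting candidate labels by PSL-Joint's continuous output values; a minimal sketch of the top-k bookkeeping is given below, assuming the per-post scores are available as a dictionary from candidate labels to output values (the function names are illustrative).

def top_k_labels(scores, k=3):
    # rank candidate fine aspects by PSL-Joint's continuous output value
    return sorted(scores, key=scores.get, reverse=True)[:k]

def top_k_accuracy(posts, k=3):
    # posts is a list of (gold_label, {label: score}) pairs
    hits = sum(1 for gold, scores in posts if gold in top_k_labels(scores, k))
    return hits / len(posts)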
PSL-Joint’s MOOC aspect predictions can potentially be used by the instructor to select a subset of posts to address in order to cover the main reported issues. We found in our data that some fine aspects, such as CERTIFICATE, have a higher percentage of instructor replies than others, such as QUIZ-GRADING. Using our system, instructors can sample from multiple aspect cate81 gories, thereby making sure that all categories of problems receive attention. 6 Conclusion In this paper, we developed a weakly supervised joint probabilistic model (PSL-Joint) for predicting aspect-sentiment in online courses. Our model provides the ability to conveniently encode domain information in the form of seed words, and weighted logical rules capturing the dependencies between aspects and sentiment. We validated our approach on an annotated dataset of MOOC posts sampled from twelve courses. We compared our PSL-Joint probabilistic model to a simpler SeededLDA approach, and demonstrated that PSL-Joint produced statistically significantly better results, exhibiting a 3–5 times improvement in F1 score in most cases over a system using only SeededLDA. As further shown by our qualitative results and instructor reply information, our system can potentially be used for understanding student requirements and issues, identifying posts for instructor intervention, increasing student retention, and improving future iterations of the course. Acknowledgements This work was supported by NSF grant IIS1218488, and IARPA via DoI/NBC contract number D12PC00337. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation thereon. Disclaimer: The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of IARPA, DoI/NBC, or the U.S. Government. References Stephen H. Bach, Bert Huang, Ben London, and Lise Getoor. 2013. Hinge-loss Markov random fields: Convex inference for structured prediction. In Proceedings of the Conference on Uncertainty in Artificial Intelligence (UAI). S. H. Bach, M. Broecheler, B. Huang, and L. Getoor. 2015. Hinge-loss Markov random fields and probabilistic soft logic. arXiv:1505.04406 [cs.LG]. David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research (JMLR). Samuel Brody and Noemie Elhadad. 2010. An unsupervised aspect-sentiment model for online reviews. In Proceedings of Human Language Technologies: Conference of the North American Chapter of the Association for Computational Linguistics (HLT). Snigdha Chaturvedi, Dan Goldwasser, and Hal Daum´e III. 2014. Predicting instructor’s intervention in mooc forums. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Qiming Diao, Minghui Qiu, Chao-Yuan Wu, Alexander J. Smola, Jing Jiang, and Chong Wang. 2014. Jointly modeling aspects, ratings and sentiments for movie recommendation (JMARS). In Proceedings of the SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD). Angela Fahrni and Manfred Klenner. 2008. Old wine or warm beer: Target-specific sentiment analysis of adjectives. In Proceedings of the Symposium on Affective Language in Human and Machine (AISB). Shobeir Fakhraei, Bert Huang, Louiqa Raschid, and Lise Getoor. 2014. Network-based drug-target interaction prediction with probabilistic soft logic. 
IEEE/ACM Transactions on Computational Biology and Bioinformatics (TCBB). Bert Huang, Angelika Kimmig, Lise Getoor, and Jennifer Golbeck. 2013. A flexible framework for probabilistic models of social trust. In Proceedings of the International Conference on Social Computing, Behavioral-Cultural Modeling, & Prediction (SBP). Jagadeesh Jagarlamudi, Hal Daum´e, III, and Raghavendra Udupa. 2012. Incorporating lexical priors into topic models. In Proceedings of the European Chapter of the Association for Computational Linguistics (EACL). Y. Jo and A.H. Oh. 2011. Aspect and sentiment unification model for online review analysis. In Proceedings of the International Conference on Web Search and Data Mining (WSDM). Suin Kim, Jianwen Zhang, Zheng Chen, Alice Oh, and Shixia Liu. 2013. A hierarchical aspect-sentiment model for online reviews. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Varada Kolhatkar, Heike Zinsmeister, and Graeme Hirst. 2013. Annotating anaphoric shell nouns with their antecedents. In Linguistic Annotation Workshop and Interoperability with Discourse. Chenghua Lin and Yulan He. 2009. Joint sentiment/topic model for sentiment analysis. In Proceedings of the Conference on Information and Knowledge Management (CIKM). Chenghua Lin, Yulan He, R. Everson, and S. Ruger. 2012. Weakly supervised joint sentiment-topic detection from text. IEEE Transactions on Knowledge and Data Engineering. 82 Bing Liu and Lei Zhang. 2012. A survey of opinion mining and sentiment analysis. In Mining Text Data. Edward Loper and Steven Bird. 2002. NLTK: The natural language toolkit. In Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics (ETMTNLP). Bin Lu, Myle Ott, Claire Cardie, and Benjamin K. Tsou. 2011. Multi-aspect sentiment analysis with topic models. In Proceedings of the International Conference on Data Mining Workshops (ICDMW). Arjun Mukherjee and Bing Liu. 2012. Aspect extraction through semi-supervised modeling. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL). Jay Pujara, Hui Miao, Lise Getoor, and William Cohen. 2013. Knowledge graph identification. In International Semantic Web Conference (ISWC). Arti Ramesh, Dan Goldwasser, Bert Huang, Hal Daume III, and Lise Getoor. 2014a. Learning latent engagement patterns of students in online courses. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI). Arti Ramesh, Dan Goldwasser, Bert Huang, Hal Daum´e III, and Lise Getoor. 2014b. Understanding MOOC discussion forums using seeded lda. In ACL Workshop on Innovative Use of NLP for Building Educational Applications (BEA). Glenda S. Stump, Jennifer DeBoer, Jonathan Whittinghill, and Lori Breslow. 2013. Development of a framework to classify MOOC discussion forum posts: Methodology and challenges. In NIPS Workshop on Data Driven Education. Ivan Titov and Ryan McDonald. 2008. Modeling online reviews with multi-grain topic models. In Proceedings of the International Conference on World Wide Web (WWW). Wayne Xin Zhao, Jing Jiang, Hongfei Yan, and Xiaoming Li. 2010. Jointly modeling aspects and opinions with a maxEnt-LDA hybrid. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). 83
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 825–835, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Non-linear Learning for Statistical Machine Translation Shujian Huang, Huadong Chen, Xinyu Dai and Jiajun Chen State Key Laboratory for Novel Software Technology Nanjing University Nanjing 210023, China {huangsj, chenhd, daixy, chenjj}@nlp.nju.edu.cn Abstract Modern statistical machine translation (SMT) systems usually use a linear combination of features to model the quality of each translation hypothesis. The linear combination assumes that all the features are in a linear relationship and constrains that each feature interacts with the rest features in an linear manner, which might limit the expressive power of the model and lead to a under-fit model on the current data. In this paper, we propose a nonlinear modeling for the quality of translation hypotheses based on neural networks, which allows more complex interaction between features. A learning framework is presented for training the non-linear models. We also discuss possible heuristics in designing the network structure which may improve the non-linear learning performance. Experimental results show that with the basic features of a hierarchical phrase-based machine translation system, our method produce translations that are better than a linear model. 1 Introduction One of the core problems in the research of statistical machine translation is the modeling of translation hypotheses. Each modeling method defines a score of a target sentence e = e1e2...ei...eI, given a source sentence f = f1f2...fj...fJ, where each ei is the ith target word and fj is the jth source word. The well-known modeling method starts from the Source-Channel model (Brown et al., 1993)(Equation 1). The scoring of e decomposes to the calculation of a translation model and a language model. Pr(e|f) = Pr(e)Pr(f|e)/Pr(f) (1) The modeling method is extended to log-linear models by Och and Ney (2002), as shown in Equation 2, where hm(e|f) is the mth feature function and λm is the corresponding weight. Pr(e|f) = pλM 1 (e|f) = exp[∑M m=1 λmhm(e|f)] ∑ e′ exp[∑M m=1 λmhm(e′|f)] (2) Because the normalization term in Equation 2 is the same for all translation hypotheses of the same source sentence, the score of each hypothesis, denoted by sL, is actually a linear combination of all features, as shown in Equation 3. sL(e) = M ∑ m=1 λmhm(e|f) (3) The log-linear models are flexible to incorporate new features and show significant advantage over the traditional source-channel models, thus become the state-of-the-art modeling method and are applied in various translation settings (Yamada and Knight, 2001; Koehn et al., 2003; Chiang, 2005; Liu et al., 2006). It is worth noticing that log-linear models try to separate good and bad translation hypotheses using a linear hyper-plane. However, complex interactions between features make it difficult to linearly separate good translation hypotheses from bad ones (Clark et al., 2014). Taking common features in a typical phrasebased (Koehn et al., 2003) or hierarchical phrasebased (Chiang, 2005) machine translation system as an example, the language model feature favors shorter hypotheses; the word penalty feature encourages longer hypotheses. 
The phrase translation probability feature selects phrases that occurs more frequently in the training corpus, which sometimes is long with a lower translation probability, as in translating named entities or idioms; 825 sometimes is short but with a high translation probability, as in translating verbs or pronouns. These three features jointly decide the choice of translations. Simply use the weighted sum of their values may not be the best choice for modeling translations. As a result, log-linear models may under-fit the data. This under-fitting may prevents the further improvement of translation quality. In this paper, we propose a non-linear modeling of translation hypotheses based on neural networks. The traditional features of a machine translation system are used as the input to the network. By feeding input features to nodes in a hidden layer, complex interactions among features are modeled, resulting in much stronger expressive power than traditional log-linear models. (Section 3) Employing a neural network for SMT modeling has two issues to be tackled. The first issue is the parameter learning. Log-linear models rely on minimum error rate training (MERT) (Och, 2003) to achieve best performance. When the scoring function become non-linear, the intersection points of these non-linear functions could not be effectively calculated and enumerated. Thus MERT is no longer suitable for learning the parameters. To solve the problem, we present a framework for effective training including several criteria to transform the training problem into a binary classification task, a unified objective function and an iterative training algorithm. (Section 4) The second issue is the structure of neural network. Single layer neural networks are equivalent to linear models; two-layer networks with sufficient nodes are capable of learning any continuous function (Bishop, 1995). Adding more layers into the network could model complex functions with less nodes, but also brings the problem of vanishing gradient (Erhan et al., 2009). We adapt a two-layer feed-forward neural network to keep the training process efficient. We notice that one major problem that prevents a neural network training reaching a good solution is that there are too many local minimums in the parameter space. Thus we discuss how to constrain the learning of neural networks with our intuitions and observations of the features. (Section 5) Experiments are conducted to compare various settings and verify the effectiveness of our proposed learning framework. Experimental results show that our framework could achieve better translation quality even with the same traditional features as previous linear models. (Section 6) 2 Related work Many research has been attempting to bring nonlinearity into the training of SMT. These efforts could be roughly divided into the following three categories. The first line of research attempted to reinterpret original features via feature transformation or additional learning. For example, Maskey and Zhou (2012) use a deep belief network to learn representations of the phrase translation and lexical translation probability features. Clark et al. (2014) used discretization to transform realvalued dense features into a set of binary indicator features. Lu et al. (2014) learned new features using a semi-supervised deep auto encoder. These work focus on the explicit representation of the features and usually employ extra learning procedure. 
Our proposed method only takes the original features, with no transformation, as the input. Feature transformation or combination are performed implicitly during the training of the network and integrated with the optimization of translation quality. The second line of research attempted to use non-linear models instead of log-linear models, which is most similar in spirit with our work. Duh and Kirchhoff (2008) used the boosting method to combine several results of MERT and achieved improvement in a re-ranking setting. Liu et al. (2013) proposed an additive neural network which employed a two-layer neural network for embedding-based features. To avoid local minimum, they still rely on a pre-training and posttraining from MERT or PRO. Comparing to these efforts, our proposed method takes a further step that it is integrated with iterative training, instead of re-ranking, and works without the help of any pre-trained linear models. The third line of research attempted to add non-linear features/components into the log-linear learning framework. Neural network based models are trained as language models (Vaswani et al., 2013; Auli and Gao, 2014), translation models (Gao et al., 2014) or joint language and translation models (Auli et al., 2013; Devlin et al., 2014). Liu et al. (2013) also introduced word embedding for source and target sides of the translation 826 input hidden layer output layer Mo Mh Figure 1: A two-layer feed-forward neural network. rules as local features. In this paper, we focus on enhancing the expressive power of the modeling, which is independent of the research of enhancing translation systems with new designed features. We believe additional improvement could be achieved by incorporating more features into our framework. 3 Non-linear Translation The non-linear modeling of translation hypotheses could be used in both phrase-based system and syntax-based systems. In this paper, we take the hierarchical phrase based machine translation system (Chiang, 2005) as an example and introduce how we fit the non-linearity into the system. 3.1 Two-layer Neural Networks We employ a two-layer neural network as the nonlinear model for scoring translation hypotheses. The structure of a typical two-layer feed-forward neural network includes an input layer, a hidden layer, and a output layer (as shown in Figure 1). We use the input layer to accept input features, the hidden layer to combine different input features, the output layer with only one node to output the model score for each translation hypothesis based on the value of hidden nodes. More specifically, the score of hypothesis e, denoted as sN, is defined as: sN(e) = σo(Mo·σh(Mh·hm 1 (e|f)+bh)+bo) (4) where M, b is the weight matrix, bias vector of the neural nodes, respectively; σ is the activation function, which is often set to non-linear functions such as the tanh function or sigmoid function; subscript h and o indicates the parameters of hidden layer and output layer, respectively. 3.2 Features We use the standard features of a typical hierarchical phrase based translation system(Chiang, 2005). Adding new features into the framework is left as a future direction. 
The features as listed as following: • p(α|γ) and p(γ|α): conditional probability of translating α as γ and translating α as γ, where α and γ is the left and right hand side of a initial phrase or hierarchical translation rule, respectively; • pw(α|γ) and pw(γ|α): lexical probability of translating words in α as words in γ and translating words in γ as words in α; • plm: language model probability; • wc: accumulated count of individual words generated during translation; • pc: accumulated count of initial phrases used; • rc: accumulated count of hierarchical rule phrases used; • gc: accumulated count of glue rule used in this hypothesis; • uc: accumulated count of unknown source word. which has no entry in the translation table; • nc: accumulated count of source phrases that translate into null; 3.3 Decoding The basic decoding algorithm could be kept almost the same as traditional phrase-based or syntax-based translation systems (Yamada and Knight, 2001; Koehn et al., 2003; Chiang, 2005; Liu et al., 2006). For example, in the experiments of this paper, we use a CKY style decoding algorithm following Chiang (2005). Our non-linear translation system is different from traditional systems in the way to calculate the score for each hypothesis. Instead of calculating the score as a linear combination, we use neural networks (Section 3.1) to perform a non-linear combination of feature values. We also use the cube-pruning algorithm (Chiang, 2005) to keep the decoding efficient. Although the non-linearity in model scores may cause more search errors (Huang and Chiang, 827 2007) finding the highest scoring hypothesis, in practice it still achieves reasonable results. 4 Non-linear Learning Framework Traditional machine translation systems rely on MERT to tune the weights of different features. MERT performs efficient search by enumerating the score function of all the hypotheses and using intersections of these linear functions to form the ”upper-envelope” of the model score function (Och, 2003). When the scoring function is non-linear, it is not feasible to find the intersections of these functions. In this section, we discuss alternatives to train the parameters for non-linear models. 4.1 Training Criteria The task of machine translation is a complex problem with structural output space. Decoding algorithms search for the translation hypothesis with the highest score, according to a given scoring function, from an exponentially large set of candidate hypotheses. The purpose of training is to select the scoring function, so that the function score the hypotheses ”correctly”. The correctness is often introduced by some extrinsic metrics, such as BLEU (Papineni et al., 2002). We denote the scoring function as s(f, e; ⃗θ), or simply s, which is parameterized by ⃗θ; denote the set of all translation hypotheses as C; denote the extrinsic metric as eval(·) 1. Note that, in linear cases, s is a linear function as in Equation 3, while in the non-linear case described in this paper, s is the scoring function in Equation 4. Ideally, the training objective is to select a scoring function ˆs, from all functions S, that scores the correct translation (or references) ˆe, higher than any other hypotheses (Equation 5). ˆs = {s ∈S|s(ˆe) > s(e) ∀e ∈C} (5) In practice, the candidate set C is exponentially large and hard to enumerate; the correct translation ˆe may not even exist in the current search space for various reasons, e.g. unknown source word. 
As a result, we use the n-best set Cnbest to approximate C, use the extrinsic metric eval(·) to evaluate the quality of hypotheses in Cnbest and use the following three alternatives as approximations to the ideal objective. 1In our experiments, we use sentence level BLEU with +1 smoothing as the evaluation metric. Best v.s. Rest (BR) To score the best hypothesis in the n-best set ˜e higher than the rest hypotheses. This objective is very similar to MERT in that it tries to optimize the score of ˜e and doesn’t concern about the ranking of rest hypotheses. In this case, ˜e is an approximation of ˆe. Best v.s. Worst (BW) To score the best hypothesis higher than the worst hypothesis in the n-best set. This objective is motivated by the practice of separating the ”hope” and ”fear” translation hypotheses (Chiang, 2012). We take a simpler strategy which uses the best and worst hypothesis in Cnbest as the ”hope” and ”fear” hypothesis, respectively, in order to avoid multi-pass decoding. Pairwise (PW) To score the better hypothesis in sampled hypothesis pairs higher than the worse one in the same pair. This objective is adapted from the Pairwise Ranking Optimization (PRO) (Hopkins and May, 2011), which tries to ranking all the hypotheses instead of selecting the best one. We use the same sampling strategy as their original paper. Note that each of the above criteria transforms the original problem of selecting best hypotheses from an exponential space to a certain pairwise comparison problem, which could be easily trained using binary classifiers. 4.2 Training Objective For the binary classification task, we use a hinge loss following Watanabe (2012). Because the network has a lot of parameters compared with the linear model, we use a L1 norm instead of L2 norm as the regularization term, to favor sparse solutions. We define our training objective function in Equation 6. arg min θ 1 N ∑ f∈D ∑ (e1,e2)∈T(f) δ(f, e1, e2; θ) + λ · ||θ||1 with δ(·) = max{s(f, e1; θ) −s(f, e2; θ) + 1, 0} (6) where D is the given training data; (e1, e2) is a training hypothesis-pair, with e1 to be the one with 828 higher eval(·) score; N is the total number of hypothesis-pairs in D; T(f), or simply T, is the set of hypothesis-pairs for each source sentence f. The set T is decided by the criterion used for training. For the BR setting, the best hypothesis is paired with every other hypothesis in the n-best list (Equation 7); while for the BW setting, it is only paired with the worst hypothesis (Equation 8). The generation of T in PW setting is the same with PRO sampling, we refer the readers to the original paper of Hopkins and May (2011). TBR = {(e1, e2)|e1 = arg max e∈Cnbest eval(e), e2 ∈Cnbest and e1 ̸= e2} (7) TBW = {(e1, e2)|e1 = arg max e∈Cnbest eval(e), e2 = arg min e∈Cnbest eval(e)} (8) 4.3 Training Procedure In standard training algorithm for classification, the training instances stays the same in each iteration. In machine translation, decoding algorithms usually return a very different n-best set with different parameters. This is due to the exponentially large size of search space. MERT and PRO extend the current n-best set by merging the n-best set of all previous iterations into a pool (Och, 2003; Hopkins and May, 2011). In this way, the enlarged n-best set may give a better approximation of the true hypothesis set C and may lead to better and more stable training results. 
We argue that the training should still focus on hypotheses obtained in current round, because in each iteration the searching for the n-best set is independent of previous iterations. To compromise the above two goals, in our practice, training hypothesis pairs are first generated from the current n-best set, then merged with the pairs generated from all previous iterations. In order to make the model focus more on pairs from current iteration, we assign pairs in previous iterations a small constant weight and assign pairs in current iteration a relatively large constant weight 2. This is inspired by the AdaBoost algorithm (Schapire, 1999) in weighting instances. Following the spirit of MERT, we propose a iterative training procedure (Algorithm 1). The 2In our experiments, we empirically set the constants to be 0.1 and 0.9, respectively. Algorithm 1 Iterative Training Algorithm Input: the set of training sentences D, max number of iteration I 1: θ0 ←RandomInit(), 2: for i = 0 to I do 3: Ti ←∅; 4: for each f ∈D do 5: Cnbest ←NbestDecode(f ; θi) 6: T ←GeneratePair(Cnbest) 7: Ti ←Ti ∪T 8: end for 9: Tall ←WeightedCombine(∪i−1 k=0Tk, Ti) 10: θi+1 ←Optimize(Tall, θi) 11: end for training starts by randomly initialized model parameters θ0 (line 1). In ith iteration, the decoding algorithm decodes each sentence f to get the n-best set Cnbest (line 5). Training hypothesis pairs T are extracted from Cnbest according to the training criterion described in Section 4.2 (line 6). Newly collected pairs Ti are combined with pairs from previous iterations before used for training (line 9). θi+1 is obtained by solving Equation 6 using the Conjugate Sub-Gradient method (Le et al., 2011) (line 10). 5 Structure of the Network Although neural networks bring strong expressive power to the modeling of translation hypothesis, training a neural network is prone to resulting in local minimum which may affect the training results. We speculate that one reason for these local minimums is that the structure of a well-connected network has too many parameters. Take a neural network with k nodes in the input layer and m nodes in the hidden layer as an example. Every node in the hidden layer is connected to each of the k input nodes. This simple structure resulting in at least k × m parameters. In Section 4.2, we use L1 norm in the objective function in order to get sparser solutions. In this section, we propose some constrained network structures according to our prior knowledge of the features. These structures have much less parameters or simpler structures comparing to original neural networks, thus reduce the possibility of getting stuck in local minimums. 829 5.1 Network with two-degree Hidden Layer We find the first pitfall of the standard two-layer neural network is that each node in the hidden layer receives input from every input layer node. Features used in SMT are usually manually designed, which has their concrete meanings. For a network of several hidden nodes, combining every features into every hidden node may be redundant and not necessary to represent the quality of a hypothesis. As a result, we take a harsh step and constrain the nodes in hidden layer to have a in-degree of two, which means each hidden node only accepts inputs from two input nodes. We do not use any other prior knowledge about features in this setting. So for a network with k nodes in the input layer, the hidden layer should contain C2 k = k(k −1)/2 nodes to accept all combinations from the input layer. 
We name this network structure as Two-Degree Hidden Layer Network (TDN). It is easy to see that a TDN has C2 k × 2 = k(k −1) parameters for the hidden layer because of the constrained degree. This is one order of magnitude less than a standard two-layer network with the same number of hidden nodes, which has C2 k × k = k2(k −1)/2 parameters. Note that we perform a 2-degree combination that looks similar in spirit with those combination of atomic features in large scale discriminative learning for other NLP tasks, such as POS tagging and parsing. However, unlike the practice in these tasks that directly combines values of different features to generate a new feature type, we first linearly combine the value of these features and perform non-linear transformation on these values via an activation function. 5.2 Network with Grouped Features It might be a too strong constraint to require the hidden node have in-degree of 2. In order to relax this constraint, we need more prior knowledge from the features. Our first observation is that there are different types of features. These types are different from each other in terms of value ranges, sources, importance, etc. For example, language model features usually take a very small value of probability, and word count feature takes a integer value and usually has a much higher weight in linear case than other count features. The second observation is that features of the same type may not have complex interaction with each other. For example, it is reasonable to combine language model features with word count features in a hidden node. But it may not be necessary to combine the count of initial phrases and the count of unknown words into a hidden node. Based on the above two intuitions, we design a new structure of network that has the following constraints: given a disjoint partition of features: G1, G2,..., Gk, every hidden node takes input from a set of input nodes, where any two nodes in this set come from two different feature groups. Under this constraint, the in-degree of a hidden node is at most k. We name this network structure as Grouped Network (GN). In practice, we divide the basic features in Section 3.2 into five groups: language model features, translation probability features, lexical probability features, the word count feature, and the rest of count features. This division considers not only the value ranges, but also types of features and the possibility of them interact with each other. 6 Experiments and Results 6.1 General Settings We conduct experiments on a large scale machine translation tasks. The parallel data comes from LDC, including LDC2002E18, LDC2003E14, LDC2004E12, LDC2004T08, LDC2005T10, LDC2007T09, which consists of 8.2 million of sentence pairs. Monolingual data includes Xinhua portion of Gigaword corpus. We use multi-references data MT03 as training data, MT02 as development data, and MT04, MT05 as test data. These data are mainly in the same genre, avoiding the extra consideration of domain adaptation. Data Usage Sents. LDC TM train 8,260,093 Gigaword LM train 14,684,074 MT03 train 919 MT02 dev 878 MT04 test 1,789 MT05 test 1,083 Table 1: Experimental data and statistics. The Chinese side of the corpora is word segmented using ICTCLAS3. 
Our translation sys3http://ictclas.nlpir.org/ 830 Criteria MT03(train) MT02(dev) MT04 MT05 BRc 35.02 36.63 34.96 34.15 BR 38.66 40.04 38.73 37.50 BW 39.55 39.36 38.72 37.81 PW 38.61 38.85 38.73 37.98 Table 2: BLEU4 in percentage on different training criteria (”BR”, ”BW” and ”PW” refer to experiments with ”Best v.s. Rest”, ”Best v.s. Worst” and ”Pairwise” training criteria, respectively. ”BRc” indicates generate hypothesis pairs from n-best set of current iteration only presented in Section 4.3. tem is an in-house implementation of the hierarchical phrase-based translation system(Chiang, 2005). We set the beam size to 20. We train a 5-gram language model on the monolingual data with MKN smoothing(Chen and Goodman, 1998). For each parameter tuning experiments, we ran the same training procedure 3 times and present the average results. The translation quality is evaluated use 4-gram case-insensitive BLEU (Papineni et al., 2002). Significant test is performed using bootstrap re-sampling implemented by Clark et al. (2011). We employ a two-layer neural network with 11 input layer nodes, corresponding to features listed in Section 3.2 and 1 output layer node. The number of nodes in the hidden layer varies in different settings. The sigmoid function is used as the activation function for each node in the hidden layer. For the output layer we use a linear activation function. We try different λ for the L1 norm from 0.01 to 0.00001 and use the one with best performance on the development set. We solve the optimization problem with ALGLIB package4. 6.2 Experiments of Training Criteria This set experiments evaluates different training criteria discussed in Section 4.1. We generate hypothesis-pair according to BW, BR and PW criteria, respectively, and perform training with these pairs. In the PW criterion, we use the sampling method of PRO (Hopkins and May, 2011) and get the 50 hypothesis pairs for each sentence. We use 20 hidden nodes for all three settings to make a fair comparison. The results are presented in Table 2. The first two rows compare training with and without the weighted combination of hypothesis pairs we discussed in Section 4.3. As the result suggested, with the weighted combination of hypothesis pairs from previous iterations, the performance improves significantly on both test sets. 4http://www.alglib.net/ Although the system performance on the dev set varies, the performance on test sets are almost comparable. This suggest that although the three training criteria are based on different assumptions, their are basically equivalent for training translation systems. Criteria Pairs/iteration Accuracy(%) BR 19 70.7 BW 1 79.5 PW 100 67.3 Table 3: Comparison of different training criteria in number of new instances per iteration and training accuracy. We also compares the three training criteria in their number of new instances per iteration and final training accuracy (Table 3). Compared to BR which tries to separate the best hypothesis from the rest hypotheses in the n-best set, and PW which tries to obtain a correct ranking of all hypotheses, BW only aims at separating the best and worst hypothesis of each iteration, which is a easier task for learning a classifiers. It requires the least training instances and achieves the best performance in training. Note that, the accuracy for each system in Table 3 are the accuracy each system achieves after training stops. They are not calculated on the same set of instances, thus not directly comparable. 
We use the differences in accuracy as an indicator for the difficulties of the corresponding learning task. For the rest of this paper, we use the BW criterion because it is much simpler compared to sampling method of PRO (Hopkins and May, 2011). 6.3 Experiments of Network Structures We make several comparisons of the network structures and compare them with a baseline hierarchical phrase-based translation system (HPB). Table 4 shows the translation performance of 831 Systems MT03(train) MT02(dev) MT04 MT05 Test Average HPB 39.25 39.07 38.81 38.01 38.41 TLayer20 39.55∗ 39.36∗ 38.72 37.81 38.27(-0.14) TLayer30 39.70+ 39.71∗ 38.89 37.90 38.40(-0.01) TLayer50 39.26 38.97 38.72 38.79+ 38.76(+0.35) TLayer100 39.42 38.77 38.65 38.65+ 38.69(+0.28) TLayer200 39.69 38.68 38.72 38.80+ 38.74(+0.32) TDN 39.60+ 38.94 38.99∗ 38.13 38.56(+0.15) GN 39.73+ 39.41+ 39.45+ 38.51+ 38.98(+0.57) Table 4: BLEU4 in percentage for comparing of systems using different network structures (HPB refers to the baseline hierarchical phrase-based system. TLayer, TDN, GN refer to the standard 2-layer network, Two-Degree Hidden Layer Network, Grouped Network, respectively. Subscript of TLayer indicates the number of nodes in the hidden layer.) +, ∗marks results that are significant better than the baseline system with p < 0.01 and p < 0.05. Systems # Hidden Nodes # Parameters Training Time per iter.(s) HPB 11 1041 TLayer20 20 261 671 TLayer30 30 391 729 TLayer50 50 651 952 TLayer100 100 1,301 1,256 TLayer200 200 2,601 2,065 TDN 55 221 808 GN 214 1,111 1,440 Table 5: Comparison of network scales and training time of different systems, including the number of nodes in the hidden layer, the number of parameters, the average training time per iteration (15 iterations). The notations of systems are the same as in Table4. different systems5. All 5 two-layer feed forward neural networks models could achieve comparable or better performance comparing to the baseline system. We can see that training a larger network may lead to better translation quality (from TLayer20 and TLayer30 to TLayer50). However, increasing the number of hidden node to 100 and 200 does not bring further improvement. One possible reason is that training a larger network with arbitrary connections brings in too many parameters which may be difficult to train with limited training data. TDN and GN are the two network structures proposed in Section 5. With the constraint that all input to the hidden node should be of degree 2, TDN performs comparable to the baseline system. With the grouped feature, we could design networks such as GN, which shows significant improvement over the baseline systems (+0.57) and achieves the best performance among all neural systems. 5TLayer20 is the same system as BW in Table 2 Table 4 shows statistics related to the efficiency issue of different systems. The baseline system (HPB) uses MERT for training. HPB has a very small number of parameters and searches for the best parameters exhaustively in each iteration. The non-linear systems with few nodes (TLayer20 and TLayer30) train faster than HPB in each iteration because they perform back-propagation instead of exhaustive search. We iterate 15 iterations for each non-linear system, while MERT takes about 10 rounds to reach its best performance. 
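As a back-of-the-envelope check (ours, not the authors'), the parameter counts in Table 5 for the fully connected networks and TDN are consistent with counting hidden weights, hidden biases, output weights, and one output bias:

```python
def n_params(inputs_per_hidden_node, n_hidden):
    """Hidden weights + hidden biases + output weights + output bias."""
    return inputs_per_hidden_node * n_hidden + n_hidden + n_hidden + 1

print(n_params(11, 20))    # 261  -> TLayer20 in Table 5
print(n_params(11, 200))   # 2601 -> TLayer200
print(n_params(2, 55))     # 221  -> TDN: 55 hidden nodes with in-degree 2
```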
When the number of nodes in the hidden layer increases (from 20 to 200), the number of parameters in the system also increases, which requires longer time to compute the score for each hypothesis and to update the parameters through backpropagation. The network with 200 hidden nodes takes about twice the time to train for each iteration, compared to the linear system6. TDN and GN have larger numbers of hidden 6Matrix operation is CPU intensive. The cost will increase when multiple tasks are running. 832 nodes. However, because of our intuitions in designing the structure of the networks, the degree of the hidden node is constrained. So these two networks are sparser in parameters and take significant less training time than standard neural networks. For example, GN has a comparable number of hidden nodes with TLayer200, but only has half of its parameters and takes about 70% time to train in each iteration. In other words, our proposed network structure provides more efficient training in these cases and achieve better results. 7 Conclusion In this paper, we discuss a non-linear framework for modeling translation hypothesis for statistical machine translation system. We also present a learning framework including training criterion and algorithms to integrate our modeling into a state of the art hierarchical phrase based machine translation system. Compared to previous effort in bringing in non-linearity into machine translation, our method uses a single two-layer neural networks and performs training independent with any previous linear training methods (e.g. MERT). Our method also trains its parameters without any pre-training or post-training procedure. Experiment shows that our method could improve the baseline system even with the same feature as input, in a large scale Chinese-English machine translation task. In training neural networks with hidden nodes, we use heuristics to reduce the complexity of network structures and obtain extra advantages over standard networks. It shows that heuristics and intuitions of the data and features are still important to a machine translation system. Neural networks are able to perform feature learning by using hidden nodes to model the interaction among a large vector of raw features, as in image and speech processing (Krizhevsky et al., 2012; Hinton et al., 2012). We are trying to model the interaction between hand-crafted features, which is indeed similar in spirit with learning features from raw features. Although our features already have concrete meaning, e.g. the probability of translation, the fluency of target sentence, etc. Combining these features may have extra advantage in modeling the translation process. As future work, it is necessary to integrate more features into our learning framework. It is also interesting to see how the non-linear modeling fits in to more complex learning tasks which involves domain specific learning techniques. Acknowledgments The authors would like to thank Yue Zhang and the anonymous reviewers for their valuable comments. This work is supported by the National Natural Science Foundation of China (No. 61300158, 61223003), the Jiangsu Provincial Research Foundation for Basic Research (No. BK20130580). References Michael Auli and Jianfeng Gao. 2014. Decoder integration and expected BLEU training for recurrent neural network language models. 
In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 2: Short Papers, pages 136–142. Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. 2013. Joint language and translation modeling with recurrent neural networks. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1044–1054. Christopher M. Bishop. 1995. Neural Networks for Pattern Recognition. Oxford University Press, Inc., New York, NY, USA. Peter F. Brown, Stephen Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematic of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263– 311. S. F. Chen and J. T. Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical report, Computer Science Group, Harvard University, Technical Report TR-10-98. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In annual meeting of the Association for Computational Linguistics. David Chiang. 2012. Hope and fear for discriminative training of statistical translation models. J. Mach. Learn. Res., 13(1):1159–1187, April. Jonathan H. Clark, Chris Dyer, Alon Lavie, and Noah A. Smith. 2011. Better hypothesis testing for statistical machine translation: Controlling for optimizer instability. In Proceedings of the 49th Annual 833 Meeting of the Association for Computational Linguistics: Human Language Technologies: Short Papers - Volume 2, HLT ’11, pages 176–181, Stroudsburg, PA, USA. Association for Computational Linguistics. Jonathan Clark, Chris Dyer, and Alon Lavie. 2014. Locally non-linear learning for statistical machine translation via discretization and structured regularization. Transactions of the Association for Computational Linguistics, 2:393–404. Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard M. Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 1370–1380. Kevin Duh and Katrin Kirchhoff. 2008. Beyond loglinear models: Boosted minimum error rate training for n-best re-ranking. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies: Short Papers, HLT-Short ’08, pages 37–40, Stroudsburg, PA, USA. Association for Computational Linguistics. Dumitru Erhan, Pierre antoine Manzagol, Yoshua Bengio, Samy Bengio, and Pascal Vincent. 2009. The difficulty of training deep architectures and the effect of unsupervised pre-training. In David V. Dyk and Max Welling, editors, Proceedings of the Twelfth International Conference on Artificial Intelligence and Statistics (AISTATS-09), volume 5, pages 153–160. Journal of Machine Learning Research - Proceedings Track. Jianfeng Gao, Xiaodong He, Wen-tau Yih, and Li Deng. 2014. Learning continuous phrase representations for translation modeling. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Volume 1: Long Papers, pages 699–709. 
Geoffrey Hinton, Li Deng, Dong Yu, George E Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N Sainath, et al. 2012. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. Signal Processing Magazine, IEEE, 29(6):82–97. Mark Hopkins and Jonathan May. 2011. Tuning as ranking. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 1352–1362, Stroudsburg, PA, USA. Association for Computational Linguistics. Liang Huang and David Chiang. 2007. Forest rescoring: Faster decoding with integrated language models. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 144–151, Prague, Czech Republic, June. Association for Computational Linguistics. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In HLTNAACL. Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. 2012. Imagenet classification with deep convolutional neural networks. In F. Pereira, C.J.C. Burges, L. Bottou, and K.Q. Weinberger, editors, Advances in Neural Information Processing Systems 25, pages 1097–1105. Curran Associates, Inc. Quoc V. Le, Jiquan Ngiam, Adam Coates, Ahbik Lahiri, Bobby Prochnow, and Andrew Y. Ng. 2011. On optimization methods for deep learning. In Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28 - July 2, 2011, pages 265–272. Yang Liu, Qun Liu, and Shouxun Lin. 2006. Treeto-string alignment template for statistical machine translation. In Proceedings of the 44th Annual Meeting of the Association of Computational Linguistics. The Association for Computer Linguistics. Lemao Liu, Taro Watanabe, Eiichiro Sumita, and Tiejun Zhao. 2013. Additive neural networks for statistical machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, ACL 2013, 4-9 August 2013, Sofia, Bulgaria, Volume 1: Long Papers, pages 791– 801. Shixiang Lu, Zhenbiao Chen, and Bo Xu. 2014. Learning new semi-supervised deep auto-encoder features for statistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 122–132, Baltimore, Maryland, June. Association for Computational Linguistics. Sameer Maskey and Bowen Zhou. 2012. Unsupervised deep belief features for speech translation. In INTERSPEECH 2012, 13th Annual Conference of the International Speech Communication Association, Portland, Oregon, USA, September 9-13, 2012. Franz Josef Och and Hermann Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. pages 295–302. Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In ACL ’03: Proceedings of the 41st Annual Meeting on Association for Computational Linguistics, pages 160– 167, Morristown, NJ, USA. Association for Computational Linguistics. Kishore Papineni, Salim Roukos, Todd Ward, and WeiJing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In ACL ’02: Proceedings of the 40th Annual Meeting on Association for 834 Computational Linguistics, pages 311–318, Morristown, NJ, USA. Association for Computational Linguistics. Robert E. Schapire. 1999. A brief introduction to boosting. 
In Proceedings of the 16th International Joint Conference on Artificial Intelligence - Volume 2, IJCAI’99, pages 1401–1406, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc. Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. 2013. Decoding with large-scale neural language models improves translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, EMNLP 2013, 18-21 October 2013, Grand Hyatt Seattle, Seattle, Washington, USA, A meeting of SIGDAT, a Special Interest Group of the ACL, pages 1387– 1392. Taro Watanabe. 2012. Optimized online rank learning for machine translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT ’12, pages 253–262, Stroudsburg, PA, USA. Association for Computational Linguistics. Kenji Yamada and Kevin Knight. 2001. A syntaxbased statistical translation model. In Proceedings of the 39th Annual Meeting of the Association of Computational Linguistics, pages 523–530. 835
2015
80
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 836–845, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Unifying Bayesian Inference and Vector Space Models for Improved Decipherment Qing Dou∗, Ashish Vaswani∗, Kevin Knight Information Sciences Institute Department of Computer Science University of Southern California {qdou,avaswani,knight}@isi.edu Chris Dyer School of Computer Science Carnegie Mellon University [email protected] Abstract We introduce into Bayesian decipherment a base distribution derived from similarities of word embeddings. We use Dirichlet multinomial regression (Mimno and McCallum, 2012) to learn a mapping between ciphertext and plaintext word embeddings from non-parallel data. Experimental results show that the base distribution is highly beneficial to decipherment, improving state-of-the-art decipherment accuracy from 45.8% to 67.4% for Spanish/English, and from 5.1% to 11.2% for Malagasy/English. 1 Introduction Tremendous advances in Machine Translation (MT) have been made since we began applying automatic learning techniques to learn translation rules automatically from parallel data. However, reliance on parallel data also limits the development and application of high-quality MT systems, as the amount of parallel data is far from adequate in low-density languages and domains. In general, it is easier to obtain non-parallel monolingual data. The ability to learn translations from monolingual data can alleviate obstacles caused by insufficient parallel data. Motivated by this idea, researchers have proposed different approaches to tackle this problem. They can be largely divided into two groups. The first group is based on the idea proposed by Rapp (1995), in which words are represented as context vectors, and two words are likely to be translations if their context vectors are similar. Initially, the vectors contained only context ∗Equal contribution words. Later extensions introduced more features (Haghighi et al., 2008; Garera et al., 2009; Bergsma and Van Durme, 2011; Daum´e and Jagarlamudi, 2011; Irvine and Callison-Burch, 2013b; Irvine and Callison-Burch, 2013a), and used more abstract representation such as word embeddings (Klementiev et al., 2012). Another promising approach to solve this problem is decipherment. It has drawn significant amounts of interest in the past few years (Ravi and Knight, 2011; Nuhn et al., 2012; Dou and Knight, 2013; Ravi, 2013) and has been shown to improve end-to-end translation. Decipherment views a foreign language as a cipher for English and finds a translation table that converts foreign texts into sensible English. Both approaches have been shown to improve quality of MT systems for domain adaptation (Daum´e and Jagarlamudi, 2011; Dou and Knight, 2012; Irvine et al., 2013) and low density languages (Irvine and Callison-Burch, 2013a; Dou et al., 2014). Meanwhile, they have their own advantages and disadvantages. While context vectors can take larger context into account, it requires high quality seed lexicons to learn a mapping between two vector spaces. In contrast, decipherment does not depend on any seed lexicon, but only looks at a limited n-gram context. In this work, we take advantage of both approaches and combine them in a joint inference process. 
More specifically, we extend previous work in large scale Bayesian decipherment by introducing a better base distribution derived from similarities of word embedding vectors. The main contributions of this work are: • We propose a new framework that combines the two main approaches to finding translations from monolingual data only. 836 • We develop a new base-distribution technique that improves state-of-the art decipherment accuracy by a factor of two for Spanish/English and Malagasy/English. • We make our software available for future research, functioning as a kind of GIZA for non-parallel data. 2 Decipherment Model In this section, we describe the previous decipherment framework that we build on. This framework follows Ravi and Knight (2011), who built an MT system using only non-parallel data for translating movie subtitles; Dou and Knight (2012) and Nuhn et al. (2012), who scaled decipherment to larger vocabularies; and Dou and Knight (2013), who improved decipherment accuracy with dependency relations between words. Throughout this paper, we use f to denote target language or ciphertext tokens, and e to denote source language or plaintext tokens. Given ciphertext f : f1...fn, the task of decipherment is to find a set of parameters P(fi|ei) that convert f to sensible plaintext. The ciphertext f can either be full sentences (Ravi and Knight, 2011; Nuhn et al., 2012) or simply bigrams (Dou and Knight, 2013). Since using bigrams and their counts speeds up decipherment, in this work, we treat f as bigrams, where f = {fn}N n=1 = {fn 1 , fn 2 }N n=1. Motivated by the idea from Weaver (1955), we model an observed cipher bigram fn with the following generative story: • First, a language model P(e) generates a sequence of two plaintext tokens en 1, en 2 with probability P(en 1, en 2). • Then, substitute en 1 with fn 1 and en 2 with fn 2 with probability P(fn 1 | en 1) · P(fn 2 | en 2). Based on the above generative story, the probability of any cipher bigram fn is: P(fn) = X e1e2 P(e1e2) 2 Y i=1 P(fn i | ei) The probability of the ciphertext corpus, P({fn}N n=1) = N Y n=1 P(fn) There are two sets of parameters in the model: the channel probabilities {P(f | e)} and the bigram language model probabilities {P(e′ | e)}, where f ranges over the ciphertext vocabulary and e, e′ range over the plaintext vocabulary. Given a plaintext bigram language model, the training objective is to learn P(f | e) that maximize P({fn}N n=1). When formulated like this, one can directly apply EM to solve the problem (Knight et al., 2006). However, EM has time complexity O(N ·V 2 e ) and space complexity O(Vf ·Ve), where Vf, Ve are the sizes of ciphertext and plaintext vocabularies respectively, and N is the number of cipher bigrams. This makes the EM approach unable to handle long ciphertexts with large vocabulary size. An alternative approach is Bayesian decipherment (Ravi and Knight, 2011). We assume that P(f | e) and P(e′ | e) are drawn from a Dirichet distribution with hyper-parameters αf,e and αe,e′, that is: P(f | e) ∼Dirichlet(αf,e) P(e | e′) ∼Dirichlet(αe,e′). The remainder of the generative story is the same as the noisy channel model for decipherment. In the next section, we describe how we learn the hyper parameters of the Dirichlet prior. 
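To make the generative story concrete, the marginal probability of one cipher bigram can be written as the following deliberately naive sketch; lm_bigram and channel are placeholder probability functions of ours, and the double loop over the plaintext vocabulary makes the per-bigram O(V_e^2) cost of EM-style inference explicit.

```python
def cipher_bigram_prob(f1, f2, lm_bigram, channel, plaintext_vocab):
    """P(f1, f2) = sum over e1, e2 of P(e1, e2) * P(f1 | e1) * P(f2 | e2)."""
    total = 0.0
    for e1 in plaintext_vocab:            # O(V_e) candidates for the first slot
        for e2 in plaintext_vocab:        # O(V_e) candidates for the second slot
            total += lm_bigram(e1, e2) * channel(f1, e1) * channel(f2, e2)
    return total
```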
Given αf,e and αe,e′, The joint likelihood of the complete data and the parameters, P({fn, en}N n=1, {P(f | e)}, {P(e | e′)}) = P({fn | en}N n=1, {P(f | e)}) P({en}N n=1, P(e | e′)) = Y e Γ P f αf,e  Q f Γ (αe,f) Y f P(f | e)#(e,f)+αe,f−1 Y e Γ P e′ αe,e′ Q e′ Γ αe,e′ Y f P(e | e′)#(e,e′)+αe,e′−1, (1) where #(e, f) and #(e, e′) are the counts of the translated word pairs and plaintext bigram pairs in the complete data, and Γ (·) is the Gamma function. Unlike EM, in Bayesian decipherment, we no longer search for parameters P(f | e) that maximize the likelihood of the observed ciphertext. Instead, we draw samples from posterior distribution of the plaintext sequences given the ciphertext. Under the above Bayesian decipherment model, it turns out that the probability of a particular cipher word fj having a value k, given the current plaintext word ej, and the samples for all 837 the other ciphertext and plaintext words, f−j and e−j, is: P(fj = k | ej, f−j, e−j) = #(k, ej)−j + αej,k #(ej)−j + P f αej,f . Where, #(k, ej)−j and #(ej)−j are the counts of the ciphertext, plaintext word pair and plaintext word in the samples excluding fj and ej. Similarly, the probability of a plaintext word ej taking a value l given samples for all other plaintext words, P(ej = l | e−j) = #(l, ej−1)−j + αl,ej−1 #(ej−1)−j + P e αe,ej−1 . (2) Since we have large amounts of plaintext data, we can train a high-quality dependency-bigram language model, PLM(e | e′) and use it to guide our samples and learn a better posterior distribution. For that, we define αe,e′ = αPLM(e | e′), and set α to be very high. The probability of a plaintext word (Equation 2) is now P(ej = l | e−j) ≈PLM(l | ej−1). (3) To sample from the posterior, we iterate over the observed ciphertext bigram tokens and use equations 2 and 3 to sample a plaintext token with probability P(ej | e−j, f) ∝PLM(ej | ej−1) PLM(ej+1 | ej)P(fj | ej, f−j, e−j). (4) In previous work (Dou and Knight, 2012), the authors use symmetric priors over the channel probabilities, where αe,f = α 1 Vf , and they set α to 1. Symmetric priors over word translation probabilities are a poor choice, as one would not apriori expect plaintext words and ciphertext words to cooccur with equal frequency. Bayesian inference is a powerful framework that allows us to inject useful prior information into the sampling process, a feature that we would like to use. In the next section, we will describe how we model and learn better priors using distributional properties of words. In subsequent sections, we show significant improvements over the baseline by learning better priors. 3 Base Distribution with Cross-Lingual Word Similarities As shown in the previous section, the base distribution in Bayesian decipherment is given independent of the inference process. A better base distribution can improve decipherment accuracy. Ideally, we should assign higher base distribution probabilities to word pairs that are similar. One straightforward way is to consider orthographic similarities. This works for closely related languages, e.g., the English word “new” is translated as “neu” in German and “nueva” in Spanish. However, this fails when two languages are not closely related, e.g., Chinese/English. Previous work aims to discover translations from comparable data based on word context similarities. This is based on the assumption that words appearing in similar contexts have similar meanings. The approach straightforwardly discovers monolingual synonyms. 
However, when it comes to finding translations, one challenge is to draw a mapping between the different context spaces of the two languages. In previous work, the mapping is usually learned from a seed lexicon. There has been much recent work in learning distributional vectors (embeddings) for words. The most popular approaches are the skip-gram and continuous-bag-of-words models (Mikolov et al., 2013a). In Mikolov et al. (2013b), the authors are able to successfully learn word translations using linear transformations between the source and target word vector-spaces. However, unlike our learning setting, their approach relied on large amounts of translation pairs learned from parallel data to train their linear transformations. Inspired by these approaches, we aim to exploit high-quality monolingual word embeddings to help learn better posterior distributions in unsupervised decipherment, without any parallel data. In the previous section, we incorporated our pre-trained language model in αe,e′ to steer our sampling. In the same vein, we model αe,f using pre-trained word embeddings, enabling us to improve our estimate of the posterior distribution. In Mimno and McCallum (2012), the authors develop topic models where the base distribution over topics is a log-linear model of observed document features, which permits learning better priors over topic distributions for each document. Similarly, we introduce a latent cross-lingual linear mapping M and define: 838 αf,e = exp{vT e Mvf}, (5) where ve and vf are the pre-trained plaintext word and ciphertext word embeddings. M is the similarity matrix between the two embedding spaces. αf,e can be thought of as the affinity of a plaintext word to be mapped to a ciphertext word. Rewriting the channel part of the joint likelihood in equation 1, P({fn | en}N n=1, {P(f | e)}) = Y e Γ P f exp{vT e Mvf}  Q f Γ (exp{vTe Mvf}) Y f P(f | e)#(e,f)+exp{vT e Mvf}−1 Integrating out the channel probabilities, the complete data log-likelihood of the observed ciphertext bigrams and the sampled plaintext bigrams, P({fn | en}) = Y e Γ P f exp{vT e Mvf}  Q f Γ (exp{vTe Mvf}) Y e Q f Γ exp{vT e Mvf} + #(e, f)  Γ P f exp{vTe Mvf} + #(e)  . We also add a L2 regularization penalty on the elements of M. The derivative of log P({fn | en} −λ 2 P i,j M2 i,j , where λ is the regularization weight, with respect to M, ∂log P({fn | en} −λ 2 P i,j M2 i,j ∂M = X e X f exp{vT e Mvf}vevT f Ψ  X f′ exp{vT e Mvf′}  − Ψ  X f′ exp{vT e Mvf′} + #(e)  + + Ψ exp{vT e Mvf} + #(e, f)  − Ψ exp{vT e Mvf}  −λM, where we use ∂exp{vT e Mvf} ∂M = exp{vT e Mvf}∂vT e Mvf ∂M = exp{vT e Mvf}vevT f . Ψ (·) is the Digamma function, the derivative of log Γ (·). Again, following Mimno and McCallum (2012), we train the similarity matrix M with stochastic EM. In the E-step, we sample plaintext words for the observed ciphertext using equation 4 and in the M-step, we learn M that maximizes log P({fn | en}) with stochastic gradient descent. The time complexity of computing the gradient is O(VeVf). However, significant speedups can be achieved by precomputing vevT f and exploiting GPUs for Matrix operations. After learning M, we can set αe,f = X f′ exp{vT e Mvf′} exp{vT e Mvf} P f′′ exp{vTe Mvf′′} = αeme,f, (6) where αe = P f′ exp{vT e Mvf′} is the concentration parameter and me,f = exp{vT e Mvf} P f′′ exp{vTe Mvf′′} is an element of the base measure me for plaintext word e. In practice, we find that αe can be very large, overwhelming the counts from sampling when we only have a few ciphertext bigrams. 
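A minimal numpy sketch (variable names and shapes are ours) of the base measure m_e in Equation 6; the concentration alpha_e is handled separately rather than taken from the unnormalized scores.

```python
import numpy as np

def base_measure(v_e, M, V_f):
    """m_{e,f} proportional to exp(v_e^T M v_f), normalized over the ciphertext vocabulary.
    v_e: (d_e,) plaintext embedding; M: (d_e, d_f); V_f: (|V_f|, d_f) ciphertext embeddings."""
    scores = v_e @ M @ V_f.T          # one affinity score per ciphertext word
    scores -= scores.max()            # numerical stability before exponentiating
    weights = np.exp(scores)
    return weights / weights.sum()
```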
Therefore, we use me and set αe proportional to the data size. 4 Deciphering Spanish Gigaword In this section, we describe our data and experimental conditions for deciphering Spanish into English. 4.1 Data In our Spanish/English decipherment experiments, we use half of the Gigaword corpus as monolingual data, and a small amount of parallel data from Europarl for evaluation. We keep only the 10k most frequent word types for both languages and replace all other word types with “UNK”. We also exclude sentences longer than 40 tokens, which significantly slow down our parser. After preprocessing, the size of data for each language is shown in Table 1. While we use all the monolingual data shown in Table 1 to learn word embeddings, we only parse the AFP (Agence FrancePresse) section of the Gigaword corpus to extract 839 Spanish English Training 992 million 940 million (Gigaword) (Gigaword) Evaluation 1.1 million 1.0 million (Europarl) (Europarl) Table 1: Size of data in tokens used in Spanish/English decipherment experiment cipher dependency bigrams and build a plaintext language model. We also use GIZA (Och and Ney, 2003) to align Europarl parallel data to build a dictionary for evaluating our decipherment. 4.2 Systems We implement a baseline system based on the work described in Dou and Knight (2013). The baseline system carries out decipherment on dependency bigrams. Therefore, we use the Bohnet parser (Bohnet, 2010) to parse the AFP section of both Spanish and English versions of the Gigaword corpus. Since not all dependency relations are shared across the two languages, we do not extract all dependency bigrams. Instead, we only use bigrams with dependency relations from the following list: • Verb / Subject • Verb / Object • Preposition / Object • Noun / Noun-Modifier We denote the system that uses our new method as DMRE (Dirichlet Multinomial Regression with Embedings). The system is the same as the baseline except that it uses a base distribution derived from word embeddings similarities. Word embeddings are learned using word2vec (Mikolov et al., 2013a). For all the systems, language models are built using the SRILM toolkit (Stolcke, 2002). We use the modified Kneser-Ney (Kneser and Ney, 1995) algorithm for smoothing. 4.3 Sampling Procedure Motivated by the previous work, we use multiple random restarts and an iterative sampling process to improve decipherment (Dou and Knight, 2012). As shown in Figure 1, we start a few sampling processes each with a different random sample. Then results from different runs are combined to initiate the next sampling iteration. The details of the sampling procedure are listed below: Figure 1: Iterative sampling procedures 1. Extract dependency bigrams from parsing outputs and collect their counts. 2. Keep bigrams whose counts are greater than a threshold t. Then start N different randomly seeded and initialized sampling processes. Perform sampling. 3. At the end of sampling, extract word translation pairs (f, e) from the final sample. Estimate translation probabilities P(e|f) for each pair. Then construct a translation table by keeping translation pairs (f, e) seen in more than one decipherment and use the average P(e|f) as the new translation probability. 4. Start N different sampling processes again. Initialize the first samples with the translation pairs obtained from the previous step (for each dependency bigram f1, f2, find an English sequence e1, e2, whose P(e1|f1) · P(e2|f2) · P(e1, e2)is the highest). 
Initialize similarity matrix M with one learned by previous sampling process whose posterior probability is highest. Go to the third step, repeat until it converges. 5. Lower the threshold t to include more bigrams into the sampling process. Go to the second step, and repeat until t = 1. 840 The sampling process consists of sampling and learning of similarity matrix M. The sampling process creates training examples for learning M, and the new M is used to update the base distribution for sampling. In our Spanish/English decipherment experiments, we use 10 different random starts. As pointed out in section 3, setting αe to it’s theoretical value (equation 6) gives poor results as it can be quite large. In experiments, we set αe to a small value for the smaller data sets and increase it as more ciphtertext becomes available. We find that using the learned base distribution always improves decipherment accuracy, however, certain ranges are better for a given data size. We use αe values of 1, 2, and 5 for ciphertexts with 100k, 1 million, and 10 million tokens respectively. We leave automatic learning of αe for future work. 5 Deciphering Malagasy Despite spoken in Africa, Malagasy has its root in Asia, and belongs to the Malayo-Polynesian branch of the Austronesian language family. Malagasy and English have very different word order (VOS versus SVO). Generally, Malagasy is a typical head-initial language: Determiners precede nouns, while other modifiers and relative clauses follow nouns (e.g. ny “the” ankizilahy “boy” kely “little”). The significant differences in word order pose great challenges for both parsing and decipherment. 5.1 Data Table 2 lists the sizes of monolingual and parallel data used in this experiment, released by Dou et al. (2014). The monolingual data in Malagasy contains news text collected from Madagascar websites. The English monolingual data contains Gigaword and an additional 300 million tokens of African news. Parallel data (used for evaluation) is collected from GlobalVoices, a multilingual news website, where volunteers translate news into different languages. 5.2 Systems The baseline system is the same as the baseline used in Spanish/English decipherment experiments. We use data provided in previous work (Dou et al., 2014) to build a Malagasy dependency parser. For English, we use the Turbo parser, trained on the Penn Treebank (Martins et Malagasy English Training 16 million 1.2 billion (Web) (Gigaword and Web) Evaluation 2.0 million 1.8 million (GlobalVoices) (GlobalVoices) Table 2: Size of data in tokens used in Malagasy/English decipherment experiment. GlobalVoices is a parallel corpus. al., 2013). Because the Malagasy parser does not predict dependency relation types, we use the following head-child part-of-speech (POS) tag patterns to select a subset of dependency bigrams for decipherment: • Verb / Noun • Verb / Proper Noun • Verb / Personal Pronoun • Preposition / Noun • Preposision / Proper Noun • Noun / Adjective • Noun / Determiner • Noun / Verb Particle • Noun / Verb Noun • Noun / Cardinal • Noun / Noun 5.3 Sampling Procedure We use the same sampling protocol designed for Spanish/English decipherment. We double the number of random starts to 20. Further more, compared with Spanish/English decipherment, we find the base distribution plays a more important role in achieving higher decipherment accuracy for Malagasy/English. Therefore, we set αe to 10, 50, and 200 when deciphering 100k, 1 million, and 20 million token ciphtertexts, respectively. 
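A condensed sketch of the per-token sampling update used inside the procedures above, i.e., Equation 4 with the embedding-based prior. Function and variable names are ours; the counts are assumed to already exclude the token being resampled, and a missing left or right context simply drops the corresponding language-model factor.

```python
import random

def resample_plaintext(f, e_left, e_right, counts_fe, counts_e, lm_prob,
                       base_m, alpha, plaintext_vocab):
    """score(e) = LM(e | e_left) * LM(e_right | e)
                  * (#(f, e) + alpha * m_{e,f}) / (#(e) + alpha)."""
    scores = []
    for e in plaintext_vocab:
        s = (counts_fe.get((f, e), 0) + alpha * base_m(e, f)) / (counts_e.get(e, 0) + alpha)
        if e_left is not None:
            s *= lm_prob(e, e_left)    # P(e | e_left)
        if e_right is not None:
            s *= lm_prob(e_right, e)   # P(e_right | e)
        scores.append(s)
    r, acc = random.random() * sum(scores), 0.0
    for e, s in zip(plaintext_vocab, scores):
        acc += s
        if acc >= r:
            return e
    return plaintext_vocab[-1]
```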
6 Results In this section, we first compare decipherment accuracy of the baseline with our new approach. Then, we evaluate the quality of the base distribution through visualization. We use top-5 type accuracy as our evaluation metric for decipherment. Given a word type f in Spanish, we find top-5 translation pairs (f, e) ranked by P(e|f) from the learned decipherent translation table. If any pair (f, e) can also be found in a gold translation lexicon Tgold, we treat 841 Spanish/English Malagasy/English Top 5k 10k 5k 10k System Baseline DMRE Baseline DMRE Baseline DMRE Baseline DMRE 100k 1.9 12.4 1.1 7.1 1.2 2.7 0.6 1.4 1 million 7.3 37.7 4.2 23.6 2.5 5.8 1.3 3.2 10 million 29.0 64.7 23.4 43.7 5.4 11.2 3.0 6.9 100 million 45.8 67.4 39.4 58.1 N/A N/A N/A N/A Table 3: Spanish/English, Malagasy/English decipherment top-5 accuracy (%) of 5k and 10k most frequent word types the word type f as correctly deciphered. Let |C| be the number of word types correctly deciphered, and |V | be the total number of word types evaluated. We define type accuracy as |C| |V |. To create Tgold, we use GIZA to align a small amount of Spanish/English parallel text (1 million tokens for each language), and use the lexicon derived from the alignment as our gold translation lexicon. Tgold contains a subset of 4233 word types in the 5k most frequent word types, and 7479 word types in the top 10k frequent word types. We decipher the 10k most frequent Spanish word types to the 10k most frequent English word types, and evaluate decipherment accuracy on both the 5k most frequent word types as well as the full 10k word types. We evaluate accuracy for the 5k and 10k most frequent word types for each language pair, and present them in Table 3. Figure 2: Learning curves of top-5 accuracy evaluated on 5k most frequent word types for Spanish/English decipherment. We also present the learning curves of decipherment accuracy for the 5k most frequent word types. Figure 2 compares the baseline with DMRE in deciphering Spanish into English. Performance of the baseline is in line with previous work (Dou and Knight, 2013). (The accuracy reported here is higher as we evaluate top-5 accuracy for each word type.) With 100k tokens of Spanish text, the baseline achieves 1.9% accuracy, while DMRE reaches 12.4% accuracy, improving the baseline by over 6 times. Although the gains attenuate as we increase the number of ciphertext tokens, they are still large. With 100 million cipher tokens, the baseline achieves 45.8% accuracy, while DMRE reaches 67.4% accuracy. Figure 3: Learning curves of top-5 accuracy evaluated on 5k most frequent word types for Malagasy/English decipherment. Figure 3 compares the baseline with our new approach in deciphering Malagasy into English. With 100k tokens of data, the baseline achieves 1.2% accuracy, and DMRE improves it to 2.4%. We observe consistent improvement throughout the experiment. In the end, the baseline accuracy obtains 5.8% accuracy, and DMRE improves it to 11.2%. Low accuracy in Malagasy-English decipherment is attributed to the following factors: First, 842 compared with the Spanish parser, the Malagasy parser has lower parsing accuracy. Second, word alignment between Malagasy and English is more challenging, producing less correct translation pairs. Last but not least, the domain of the English language model is much closer to the domain of the Spanish monolingual text compared with that of Malagasy. Overall, we achieve large consistent gains across both language pairs. 
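The top-5 type accuracy reported above reduces to the following check (a sketch with hypothetical data structures): a source type counts as correct if any of its top-5 candidates by P(e|f) appears in the gold lexicon.

```python
def top5_type_accuracy(translation_table, gold_lexicon, eval_types):
    """|C| / |V|: fraction of evaluated source types with a gold pair in their top 5."""
    correct = 0
    for f in eval_types:
        top5 = sorted(translation_table.get(f, {}).items(),
                      key=lambda kv: kv[1], reverse=True)[:5]
        if any((f, e) in gold_lexicon for e, _ in top5):
            correct += 1
    return correct / len(eval_types)
```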
We hypothesize the gain comes from a better base distribution that considers larger context information. This helps prevent the language model driving deicpherment to a wrong direction. Since our learned transformation matrix M significantly improves decipherment accuracy, it’s likely that it is translation preserving, that is, plaintext words are transformed from their native vector space to points in the ciphertext such that translations are close to each other. To visualize this effect, we take the 5k most frequent plaintext words and transform them into new embeddings in the ciphertext embedding space ve′ = vT e M, where M is learned from 10 million Spanish bigram data. We then project the 5k most frequent ciphertext words and the projected plaintext words from the joint embedding space into a 2−dimensional space using t-sne (?). In Figure 4, we see an instance of a recurring phenomenon, where translation pairs are very close and sometimes even overlap each other, for example (judge, jueces), (secret, secretos). The word “magistrado” does not appear in our evaluation set. However, it is placed close to its possible translations. Thus, our approach is capable of learning word translations that cannot be discovered from limited parallel data. We often also see translation clusters, where translations of groups of words are close to each other. For example, in Figure 5, we can see that time expressions in Spanish are quite close to their translations in English. Although better quality translation visualizations (Mikolov et al., 2013b) have been presented in previous work, they exploit large amounts of parallel data to learn the mapping between source and target words, while our transformation is learned on non-parallel data. These results show that our approach can achieve high decipherment accuracy and discover novel word translations from non-parallel data. Figure 4: Translation pairs are often close and sometimes overlap each other. Words in spanish have been appended with spanish Figure 5: Semantic groups of word-translations appear close to each other. 7 Conclusion and Future Work We proposed a new framework that simultaneously performs decipherment and learns a crosslingual mapping of word embeddings. Our method is both theoretically appealing and practically powerful. The mapping is used to give decipherment a better base distribution. Experimental results show that our new algorithm improved state-of-the-art decipherment accuracy significantly: from 45.8% to 67.4% for Spanish/English, and 5.1% to 11.2% for Malagasy/English. This improvement could lead to further advances in using monolingual data to improve end-to-end MT. In the future, we will work on making the our approach scale to much larger vocabulary sizes using noise contrastive estimation (?), and apply it to improve MT systems. 843 Acknowledgments This work was supported by ARL/ARO (W911NF-10-1-0533) and DARPA (HR0011-12C-0014). References Shane Bergsma and Benjamin Van Durme. 2011. Learning bilingual lexicons using the visual similarity of labeled web images. In Proceedings of the Twenty-Second international joint conference on Artificial Intelligence - Volume Volume Three. AAAI Press. Bernd Bohnet. 2010. Top accuracy and fast dependency parsing is not a contradiction. In Proceedings of the 23rd International Conference on Computational Linguistics. Coling. Hal Daum´e, III and Jagadeesh Jagarlamudi. 2011. Domain adaptation for machine translation by mining unseen words. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Qing Dou and Kevin Knight. 2012. Large scale decipherment for out-of-domain machine translation. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Association for Computational Linguistics. Qing Dou and Kevin Knight. 2013. Dependencybased decipherment for resource-limited machine translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. Qing Dou, Ashish Vaswani, and Kevin Knight. 2014. Beyond parallel data: Joint word alignment and decipherment improves machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Nikesh Garera, Chris Callison-Burch, and David Yarowsky. 2009. Improving translation lexicon induction from monolingual corpora via dependency contexts and part-of-speech equivalences. In Proceedings of the Thirteenth Conference on Computational Natural Language Learning. Association for Computational Linguistics. Aria Haghighi, Percy Liang, Taylor Berg-Kirkpatrick, and Dan Klein. 2008. Learning bilingual lexicons from monolingual corpora. In Proceedings of ACL08: HLT. Association for Computational Linguistics. Ann Irvine and Chris Callison-Burch. 2013a. Combining bilingual and comparable corpora for low resource machine translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation. Association for Computational Linguistics, August. Ann Irvine and Chris Callison-Burch. 2013b. Supervised bilingual lexicon induction with multiple monolingual signals. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Ann Irvine, Chris Quirk, and Hal Daume III. 2013. Monolingual marginal matching for translation model adaptation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics. Alexandre Klementiev, Ivan Titov, and Binod Bhattarai. 2012. Inducing crosslingual distributed representations of words. In Proceedings of COLING 2012. The COLING 2012 Organizing Committee. Reinhard Kneser and Hermann Ney. 1995. Improved backing-off for m-gram language modeling. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP). Kevin Knight, Anish Nair, Nishit Rathod, and Kenji Yamada. 2006. Unsupervised analysis for decipherment problems. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions. Association for Computational Linguistics. Andre Martins, Miguel Almeida, and Noah A. Smith. 2013. Turning on the turbo: Fast third-order nonprojective turbo parsers. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Computational Linguistics. Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781. Tomas Mikolov, Quoc V Le, and Ilya Sutskever. 2013b. Exploiting similarities among languages for machine translation. arXiv preprint arXiv:1309.4168. David Mimno and Andrew McCallum. 2012. 
Topic models conditioned on arbitrary features with dirichlet-multinomial regression. arXiv preprint arXiv:1206.3278. Malte Nuhn, Arne Mauser, and Hermann Ney. 2012. Deciphering foreign language by combining language models and context vectors. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers - Volume 1. Association for Computational Linguistics. Franz Josef Och and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics. 844 Reinhard Rapp. 1995. Identifying word translations in non-parallel texts. In Proceedings of the 33rd annual meeting on Association for Computational Linguistics. Association for Computational Linguistics. Sujith Ravi and Kevin Knight. 2011. Deciphering foreign language. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies. Association for Computational Linguistics. Sujith Ravi. 2013. Scalable decipherment for machine translation via hash sampling. In Proceedings of the 51th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics. Andreas Stolcke. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of the International Conference on Spoken Language Processing. Warren Weaver, 1955. Translation (1949). Reproduced in W.N. Locke, A.D. Booth (eds.). MIT Press. 845
2015
81
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 846–856, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Non-projective Dependency-based Pre-Reordering with Recurrent Neural Network for Machine Translation Antonio Valerio Miceli-Barone Universit`a di Pisa Largo B. Pontecorvo, 3 56127 Pisa, Italy [email protected] Giuseppe Attardi Universit`a di Pisa Largo B. Pontecorvo, 3 56127 Pisa, Italy [email protected] Abstract The quality of statistical machine translation performed with phrase based approaches can be increased by permuting the words in the source sentences in an order which resembles that of the target language. We propose a class of recurrent neural models which exploit source-side dependency syntax features to reorder the words into a target-like order. We evaluate these models on the German-to-English and Italian-toEnglish language pairs, showing significant improvements over a phrasebased Moses baseline. We also compare with state of the art German-toEnglish pre-reordering rules, showing that our method obtains similar or better results. 1 Introduction Statistical machine translation is typically performed using phrase-based systems (Koehn et al., 2007). These systems can usually produce accurate local reordering but they have difficulties dealing with the long-distance reordering that tends to occur between certain language pairs (Birch et al., 2008). The quality of phrase-based machine translation can be improved by reordering the words in each sentence of source-side of the parallel training corpus in a ”target-like” order and then applying the same transformation as a pre-processing step to input strings during execution. When the source-side sentences can be accurately parsed, pre-reordering can be performed using hand-coded rules. This approach has been successfully applied to German-to-English (Collins et al., 2005) and other languages. The main issue with it is that these rules must be designed for each specific language pair, which requires considerable linguistic expertise. Fully statistical approaches, on the other hand, learn the reordering relation from word alignments. Some of them learn reordering rules on the constituency (Dyer and Resnik, 2010) (Khalilov and Fonollosa, 2011) or projective dependency (Genzel, 2010), (Lerner and Petrov, 2013) parse trees of source sentences. The permutations that these methods can learn can be generally non-local (i.e. high distance) on the sentences but local (parent-child or sibling-sibling swaps) on the parse trees. Moreover, constituency or projective dependency trees may not be the ideal way of representing the syntax of nonanalytic languages such as German or Italian, which could be better described using non-projective dependency trees (Bosco and Lombardo, 2004). Other methods, based on recasting reordering as a combinatorial optimization problem (Tromble and Eisner, 2009), (Visweswariah et al., 2011), can learn to generate in principle arbitrary permutations, but they can only make use of minimal syntactic information (part-of-speech tags) and therefore can’t exploit the potentially valuable structural syntactic information provided by a parser. 
In this work we propose a class of reordering models which attempt to close this gap by 846 exploiting rich dependency syntax features and at the same time being able to process non-projective dependency parse trees and generate permutations which may be nonlocal both on the sentences and on the parse trees. We represent these problems as sequence prediction machine learning tasks, which we address using recurrent neural networks. We applied our model to reorder German sentences into an English-like word order as a pre-processing step for phrase-based machine translation, obtaining significant improvements over the unreordered baseline system and quality comparable to the handcoded rules introduced by Collins et al. (2005). We also applied our model to Italianto-English pre-reordering, obtaining a considerable improvement over the unreordered baseline. 2 Reordering as a walk on a dependency tree In order to describe the non-local reordering phenomena that can occur between language pairs such as German-to-English and Italianto-English, we introduce a reordering framework similar to (Miceli Barone and Attardi, 2013), based on a graph walk of the dependency parse tree of the source sentence. This framework doesn’t restrict the parse tree to be projective, and allows the generation of arbitrary permutations. Let f ≡( f1, f2, . . . , fL f ) be a source sentence, annotated by a rooted dependency parse tree: ∀j ∈1, . . . , L f , hj ≡PARENT(j) We define a walker process that walks from word to word across the edges of the parse tree, and at each steps optionally emits the current word, with the constraint that each word must be eventually emitted exactly once. Therefore, the final string of emitted words f ′ is a permutation of the original sentence f, and any permutation can be generated by a suitable walk on the parse tree. 2.1 Reordering automaton We formalize the walker process as a nondeterministic finite-state automaton. The state v of the automaton is the tuple v ≡ (j, E, a) where j ∈1, . . . , L f is the current vertex (word index), E is the set of emitted vertices, a is the last action taken by the automaton. The initial state is: v(0) ≡(root f , {}, null) where root f is the root vertex of the parse tree. At each step t, the automaton chooses one of the following actions: • EMIT: emit the word fj at the current vertex j. This action is enabled only if the current vertex has not been already emitted: j /∈E (j, E, a) EMIT →(j, E ∪{j}, EMIT) (1) • UP: move to the parent of the current vertex. Enabled if there is a parent and we did not just come down from it: hj ̸= null, a ̸= DOWNj (j, E, a) UP →(hj, E, UPj) (2) • DOWNj′: move to the child j′ of the current vertex. Enabled if the subtree s(j′) rooted at j′ contains vertices that have not been already emitted and if we did not just come up from it: hj′ = j, a ̸= UPj′, ∃k ∈s(j′) : k /∈E (j, E, a) DOWNj′ → (j′, E, DOWNj′) (3) The execution continues until all the vertices have been emitted. We define the sequence of states of the walker automaton during one run as an execution ¯v ∈GEN( f ). An execution also uniquely specifies the sequence of actions performed by the automation. The preconditions make sure that all execution of the automaton always end generating a permutation of the source sentence. Furthermore, no cycles are possible: progress is made at every step, and it is not possible to enter in an execution that later turns out to be invalid. Every permutation of the source sentence can be generated by some execution. 
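A compact sketch of the enabled-action computation for the walker automaton defined above; parent, children, and subtree_has_unemitted are hypothetical helpers over the dependency tree.

```python
def enabled_actions(state, parent, children, subtree_has_unemitted):
    """state = (j, emitted, last_action); last_action is None, 'EMIT',
    ('UP', vertex we left) or ('DOWN', vertex we entered)."""
    j, emitted, last = state
    actions = []
    if j not in emitted:
        actions.append('EMIT')                               # precondition of (1)
    if parent[j] is not None and last != ('DOWN', j):
        actions.append(('UP', j))                            # precondition of (2)
    for c in children[j]:
        if last != ('UP', c) and subtree_has_unemitted(c, emitted):
            actions.append(('DOWN', c))                      # precondition of (3)
    return actions
```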
In fact, each permutation f ′ can be generated by exactly one execution, which we denote as ¯v( f ′). 847 We can split the execution ¯v( f ′) into a sequence of L f emission fragments ¯vj( f ′), each ending with an EMIT action. The first fragment has zero or more DOWN∗ actions followed by one EMIT action, while each other fragment has a non-empty sequence of UP and DOWN∗actions (always zero or more UPs followed by zero or more DOWNs) followed by one EMIT action. Finally, we define an action in an execution as forced if it was the only action enabled at the step where it occurred. 2.2 Application Suppose we perform reordering using a typical syntax-based system which processes source-side projective dependency parse trees and is limited to swaps between pair of vertices which are either in a parentchild relation or in a sibling relation. In such execution the UP actions are always forced, since the ”walker” process never leaves a subtree before all its vertices have been emitted. Suppose instead that we could perform reordering according to an ”oracle”. The executions of our automaton corresponding to these permutations will in general contain unforced UP actions. We define these actions, and the execution fragments that exhibit them, as non-tree-local. In practice we don’t have access to a reordering ”oracle”, but for sentences pairs in a parallel corpus we can compute heuristic ”pseudo-oracle” reference permutations of the source sentences from word-alignments. Following (Al-Onaizan and Papineni, 2006), (Tromble and Eisner, 2009), (Visweswariah et al., 2011), (Navratil et al., 2012), we generate word alignments in both the source-to-target and the target-to-source directions using IBM model 4 as implemented in GIZA++ (Och et al., 1999) and then we combine them into a symmetrical word alignment using the ”grow-diag-final-and” heuristic implemented in Moses (Koehn et al., 2007). Given the symmetric word-aligned corpus, we assign to each source-side word an integer index corresponding to the position of the leftmost target-side word it is aligned to (attaching unaligned words to the following aligned word) and finally we perform a stable sort of source-side words according to this index. 2.3 Reordering example Consider the segment of a German sentence shown in fig. 1. The English-reordered segment ”die W¨ahrungsreserven anfangs lediglich dienen sollten zur Verteidigung” corresponds to the English: ”the reserve assets were originally intended to provide protection”. In order to compose this segment from the original German, the reordering automaton described in our framework must perform a complex sequence of moves on the parse tree: • Starting from ”sollten”, descend to ”dienen”, descent to ”W¨ahrungsreserven” and finally to ”die”. Emit it, then go up to ”W¨ahrungsreserven”, emit it and go up to ”dienen” and up again to ”sollten”. Note that the last UP is unforced since ”dienen” has not been emitted at that point and has also unemitted children. This unforced action indicates non-tree-local reordering. • Go down to ”anfangs”. Note that the in the parse tree edge crosses another edge, indicating non-projectivity. Emit ”anfangs” and go up (forced) back to ”sollten”. • Go down to ”dienen”, down to ”zur”, down to ”lediglich” and emit it. Go up (forced) to ”zur”, up (unforced) to ”dienen”, emit it, go up (unforced) to ”sollten”, emit it. Go down to ”dienen”, down to ”zur” emit it, go down to ”Verteidigung” and emit it. 
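The pseudo-oracle permutations of Section 2.2, which provide the reference reorderings used to train on examples like the one above, can be sketched as follows. This is an illustrative reconstruction of the stable-sort heuristic, not the authors' code; the symmetrized word alignment is assumed to be given as a set of (source index, target index) pairs, and the treatment of trailing unaligned words (left at the end of the sentence) is our own assumption.

def pseudo_oracle_permutation(src_len, alignment):
    # leftmost target position aligned to each source word, or None if unaligned
    leftmost = [None] * src_len
    for s, t in alignment:
        if leftmost[s] is None or t < leftmost[s]:
            leftmost[s] = t
    # attach unaligned words to the following aligned word; words with no
    # following aligned word keep an infinite key and stay at the end (assumption)
    keys = [None] * src_len
    nxt = float("inf")
    for s in range(src_len - 1, -1, -1):
        if leftmost[s] is not None:
            nxt = leftmost[s]
        keys[s] = leftmost[s] if leftmost[s] is not None else nxt
    # stable sort of source positions by their target-side key
    return sorted(range(src_len), key=lambda s: keys[s])

# usage: order = pseudo_oracle_permutation(len(src_tokens), sym_alignment)
#        reordered = [src_tokens[i] for i in order]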
Correct reordering of this segment would be difficult both for a phrase-based system (since the words are further apart than both the typical maximum distortion distance and maximum phrase length) and for a syntaxbased system (due to the presence of nonprojectivity and non-tree-locality). 848 Figure 1: Section of the dependency parse tree of a German sentence. 3 Recurrent Neural Network reordering models Given the reordering framework described above, we could try to directly predict the executions as Miceli Barone and Attardi (2013) attempted with their version of the framework. However, the executions of a given sentence can have widely different lengths, which could make incremental inexact decoding such as beam search difficult due to the need to prune over partial hypotheses that have different numbers of emitted words. Therefore, we decided to investigate a different class of models which have the property that state transition happen only in correspondence with word emission. This enables us to leverage the technology of incremental language models. Using language models for reordering is not something new (Feng et al., 2010), (Durrani et al., 2011), (Bisazza and Federico, 2013), but instead of using a more or less standard n-gram language model, we are going to base our model on recurrent neural network language models (Mikolov et al., 2010). Neural networks allow easy incorporation of multiple types of features and can be trained more specifically on the types of sequences that will occur during decoding, hence they can avoid wasting model space to represent the probabilities of nonpermutations. 3.1 Base RNN-RM Let f ≡( f1, f2, . . . , fL f ) be a source sentence. We model the reordering system as a deterministic single hidden layer recurrent neural network: v(t) = τ(Θ(1) · x(t) + ΘREC · v(t −1)) (4) where x(t) ∈Rn is a feature vector associated to the t-th word in a permutation f ′, v(0) ≡ vinit, Θ(1) and ΘREC are parameters1 and τ(·) is the hyperbolic tangent function. If we know the first t −1 words of the permutation f ′ in order to compute the probability distribution of the t-th word we do the following: • Iteratively compute the state v(t −1) from the feature vectors x(1), . . . , x(t − 1). • For the all the indices of the words that haven’t occurred in the permutation so far j ∈J(t) ≡([1, L f ] −¯it−1:), compute a score hj,t ≡ho(v(t −1), xo(j)), where xo(·) is the feature vector of the candidate target word. • Normalize the scores using the logistic softmax function: P( ¯It = j| f, ¯it−1:, t) = exp(hj,t) ∑j′∈J(t) exp(hj′,t). The scoring function ho(v(t −1), xo(j)) applies a feed-forward hidden layer to the feature inputs xo(j), and then takes a weighed inner product between the activation of this layer and the state v(t −1). The result is then linearly combined to an additional feature equal to the logarithm of the remaining words in the permutation (L f −t),2 and to a bias feature: hj,t ≡< τ(Θ(o) · xo(j)), θ(2) ⊙v(t −1) > + θ(α) · log(L f −t) + θ(bias) (5) where hj,t ≡ho(v(t −1), xo(j)). 1we don’t use a bias feature since it is redundant when the layer has input features encoded with the ”one-hot” encoding 2since we are then passing this score to a softmax of variable size (L f −t), this feature helps the model to keep the score already approximately scaled. 
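A minimal numpy sketch of the Base RNN-RM state update (eq. 4) and candidate scoring (eq. 5) is given below. It is illustrative only, not the authors' Theano implementation; parameter and feature names are placeholders chosen to mirror the text, the feature functions x(.) and xo(.) are assumed to return fixed-size vectors, and the number of remaining words is assumed to be positive (the last choice is forced and need not be scored).

import numpy as np

def next_state(params, x_t, v_prev):
    # eq. (4): v(t) = tanh(Theta1 x(t) + ThetaREC v(t-1))
    return np.tanh(params["Theta1"] @ x_t + params["ThetaREC"] @ v_prev)

def candidate_score(params, v_prev, xo_j, remaining):
    # eq. (5): inner product between a feed-forward layer over the candidate
    # features and the gated previous state, plus log-count and bias terms
    hidden = np.tanh(params["ThetaO"] @ xo_j)
    return (hidden @ (params["theta2"] * v_prev)
            + params["theta_alpha"] * np.log(remaining)   # remaining = L_f - t, assumed > 0
            + params["theta_bias"])

def next_position_distribution(params, v_prev, xo, candidates, remaining):
    # softmax over the source positions not yet emitted
    scores = np.array([candidate_score(params, v_prev, xo(j), remaining)
                       for j in candidates])
    scores -= scores.max()                                # numerical stability
    probs = np.exp(scores)
    return dict(zip(candidates, probs / probs.sum()))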
849 We can compute the probability of an entire permutation f ′ just by multiplying the probabilities for each word: P( f ′| f ) = P( ¯I = ¯i| f ) = ∏ L f t=1 P( ¯It = ¯it| f, t) 3.1.1 Training Given a training set of pairs of sentences and reference permutations, the training problem is defined as finding the set of parameters θ ≡ (vinit, Θ(1), θ(2), ΘREC, Θ(o), θ(α), θ(bias)) which minimize the per-word empirical cross-entropy of the model w.r.t. the reference permutations in the training set. Gradients can be efficiently computed using backpropagation through time (BPTT). In practice we used the following training architecture: Stochastic gradient descent, with each training pair ( f, f ′) considered as a single minibatch for updating purposes. Gradients computed using the automatic differentiation facilities of Theano (Bergstra et al., 2010) (which implements a generalized BPTT). No truncation is used. L2-regularization 3. Learning rates dynamically adjusted per scalar parameter using the AdaDelta heuristic (Zeiler, 2012). Gradient clipping heuristic to prevent the ”exploding gradient” problem (Graves, 2013). Early stopping w.r.t. a validation set to prevent overfitting. Uniform random initialization for parameters other than the recurrent parameter matrix ΘREC. Random initialization with echo state property for ΘREC, with contraction coefficient σ = 0.99 (Jaeger, 2001), (Gallicchio and Micheli, 2011). Training time complexity is O(L2 f ) per sentence, which could be reduced to O(L f ) using truncated BTTP at the expense of update accuracy and hence convergence speed. Space complexity is O(L f ) per sentence. 3.1.2 Decoding In order to use the RNN-RM model for prereordering we need to compute the most likely permutation ∗ f ′ of the source sentence f: ∗ f ′ ≡argmax f ′∈GEN( f ) P( f ′| f ) (6) 3λ = 10−4 on the recurrent matrix, λ = 10−6 on the final layer, per minibatch. Solving this problem to the global optimum is computationally hard4, hence we solve it to a local optimum using a beam search strategy. We generate the permutation incrementally from left to right. Starting from an initial state consisting of an empty string and the initial state vector vinit, at each step we generate all possible successor states and retain the Bmost probable of them (histogram pruning), according to the probability of the entire prefix of permutation they represent. Since RNN state vectors do not decompose in a meaningful way, we don’t use any hypothesis recombination. At step t there are L f −t possible successor states, and the process always takes exactly L f steps5, therefore time complexity is O(B · L2 f ) and space complexity is O(B). 3.1.3 Features We use two different feature configurations: unlexicalized and lexicalized. In the unlexicalized configuration, the state transition input feature function x(j) is composed by the following features, all encoded using the ”one-hot” encoding scheme: • Unigram: POS(j), DEPREL(j), POS(j) ∗ DEPREL(j). Left, right and parent unigram: POS(k), DEPREL(k), POS(k) ∗ DEPREL(k), where k is the index of respectively the word at the left (in the original sentence), at the right and the dependency parent of word j. Unique tags are used for padding. • Pair features: POS(j) ∗POS(k), POS(j) ∗ DEPREL(k), DEPREL(j) ∗POS(k), DEPREL(j) ∗DEPREL(k), for k defined as above. • Triple features POS(j) ∗POS(le f tj) ∗ POS(rightj), POS(j) ∗POS(le f tj) ∗ POS(parentj), POS(j) ∗POS(rightj) ∗ POS(parentj). 
• Bigram: POS(j) ∗POS(k), POS(j) ∗ DEPREL(k), DEPREL(j) ∗POS(k) where k is the previous emitted word in the permutation. 4NP-hard for at least certain choices of features and parameters 5actually, L f −1, since the last choice is forced 850 • Topological features: three binary features which indicate whether word j and the previously emitted word are in a parent-child, child-parent or siblingsibling relation, respectively. The target word feature function xo(j) is the same as x(j) except that each feature is also conjoined with a quantized signed distance6 between word j and the previous emitted word. Feature value combinations that appear less than 100 times in the training set are replaced by a distinguished ”rare” tag. The lexicalized configuration is equivalent to the unlexicalized one except that x(j) and xo(j) also have the surface form of word j (not conjoined with the signed distance). 3.2 Fragment RNN-RM The Base RNN-RM described in the previous section includes dependency information, but not the full information of reordering fragments as defined by our automaton model (sec. 2). In order to determine whether this rich information is relevant to machine translation pre-reordering, we propose an extension, denoted as Fragment RNNRM, which includes reordering fragment features, at expense of a significant increase of time complexity. We consider a hierarchical recurrent neural network. At top level, this is defined as the previous RNN. However, the x(j) and xo(j) vectors, in addition to the feature vectors described as above now contain also the final states of another recurrent neural network. This internal RNN has a separate clock and a separate state vector. For each step t of the top-level RNN which transitions between word f ′(t −1) and f ′(t), the internal RNN is reinitialized to its own initial state and performs multiple internal steps, one for each action in the fragment of the execution that the walker automaton must perform to walk between words f ′(t −1) and f ′(t) in the dependency parse (with a special shortcut of length one if they are adjacent in f with monotonic relative order). 6values greater than 5 and smaller than 10 are quantized as 5, values greater or equal to 10 are quantized as 10. Negative values are treated similarly. The state transition of the inner RNN is defined as: vr(t) = τ(Θ(r1) · xr(tr) + ΘrREC · vr(tr −1))(7) where xr(tr) is the feature function for the word traversed at inner time tr in the execution fragment. vr(0) = vinit r , Θ(r1) and ΘrREC are parameters. Evaluation and decoding are performed essentially in the same was as in Base RNNRM, except that the time complexity is now O(L3 f ) since the length of execution fragments is O(L f ). Training is also essentially performed in the same way, though gradient computation is much more involved since gradients propagate from the top-level RNN to the inner RNN. In our implementation we just used the automatic differentiation facilities of Theano. 3.2.1 Features The unlexicalized features for the inner RNN input vector xr(tr) depend on the current word in the execution fragment (at index tr), the previous one and the action label: UP, DOWN or RIGHT (shortcut). EMIT actions are not included as they always implicitly occur at the end of each fragment. Specifically the features, encoded with the ”one-hot” encoding are: A ∗POS(tr) ∗ POS(tr −1), A ∗POS(tr) ∗DEPREL(tr − 1), A ∗DEPREL(tr) ∗POS(tr −1), A ∗ DEPREL(tr) ∗DEPREL(tr −1). 
These features are also conjoined with the quantized signed distance (in the original sentence) between each pair of words. The lexicalized features just include the surface form of each visited word at tr. 3.3 Base GRU-RM We also propose a variant of the Base RNNRM where the standard recurrent hidden layer is replaced by a Gated Recurrent Unit layer, recently proposed by Cho et al. (2014) for machine translation applications. The Base GRU-RM is defined as the Base RNN-RM of sec. 3.1, except that the recurrence relation 4 is replaced by fig. 2 Features are the same of unlexicalized Base RNN-RM (we experienced difficulties training the Base GRU-RM with lexicalized features). 851 vrst(t) = π(Θ(1) rst · x(t) + ΘREC rst · v(t −1)) vupd(t) = π(Θ(1) upd · x(t) + ΘREC upd · v(t −1)) vraw(t) = τ(Θ(1) · x(t) + ΘREC · v(t −1) ⊙vupd(t)) v(t) = vrst(t) ⊙v(t −1) + (1 −vrst(t)) ⊙vraw(t) (8) Figure 2: GRU recurrence equations. vrst(t) and vupd(t) are the activation vectors of the ”reset” and ”update” gates, respectively, and π(·) is the logistic sigmoid function. . Training is also performed in the same way except that we found more beneficial to convergence speed to optimize using Adam (Kingma and Ba, 2014) 7 rather than AdaDelta. In principle we could also extend the Fragment RNN-RM into a Fragment GRU-RM, but we did not investigate that model in this work. 4 Experiments We performed German-to-English prereordering experiments with Base RNN-RM (both unlexicalized and lexicalized), Fragment RNN-RM and Base GRU-RM. In order to validate the experimental results on a different language pair, we additionally performed an Italian-to-English prereordering experiment with the Base GRURM, after assessing that this was the model that obtained the largest improvement on German-to-English. 4.1 Setup The German-to-English baseline phrasebased system was trained on the Europarl v7 corpus (Koehn, 2005). We randomly split it in a 1,881,531 sentence pairs training set, a 2,000 sentence pairs development set (used for tuning) and a 2,000 sentence pairs test set. The English language model was trained on the English side of the parallel corpus augmented with a corpus of sentences from AP News, for a total of 22,891,001 sentences. The baseline system is phrase-based Moses in a default configuration with maximum distortion distance equal to 6 and lexicalized reordering enabled. Maximum phrase size is 7with learning rate 2 · 10−5 and all the other hyperparameters equal to the default values in the article. equal to 7. The language model is a 5-gram IRSTLM/KenLM. The pseudo-oracle system was trained on the training and tuning corpus obtained by permuting the German source side using the heuristic described in section 2.2 and is otherwise equal to the baseline system. In addition to the test set extracted from Europarl, we also used a 2,525 sentence pairs test set (”news2009”) a 3,000 sentence pairs ”challenge” set used for the WMT 2013 translation task (”news2013”). The Italian-to-English baseline system was trained on a parallel corpus assembled from Europarl v7, JRC-ACQUIS v2.2 (Steinberger et al., 2006) and additional bilingual articles crawled from online newspaper websites8, totaling 3,081,700 sentence pairs, which were split into a 3,075,777 sentence pairs phrasetable training corpus, a 3,923 sentence pairs tuning corpus, and a 2,000 sentence pairs test corpus. Non-projective dependency parsing for our models, both for German and Italian was performed with the DeSR transition-based parser (Attardi, 2006). 
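Before describing the comparison systems, the GRU recurrence printed in Figure 2 (Section 3.3) can be sketched as follows. The figure's notation leaves the placement of the element-wise product in the candidate state ambiguous, so the reading below (gating the recurrent contribution by the "update" gate) is an assumption, as are the parameter names; this is an illustration, not the authors' implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(p, x_t, v_prev):
    # "reset" and "update" gate activations, following the naming in Figure 2
    v_rst = sigmoid(p["Theta1_rst"] @ x_t + p["ThetaREC_rst"] @ v_prev)
    v_upd = sigmoid(p["Theta1_upd"] @ x_t + p["ThetaREC_upd"] @ v_prev)
    # candidate state: recurrent contribution gated element-wise by v_upd (assumed reading)
    v_raw = np.tanh(p["Theta1"] @ x_t + (p["ThetaREC"] @ v_prev) * v_upd)
    # convex combination of previous and candidate state, gated by v_rst
    return v_rst * v_prev + (1.0 - v_rst) * v_raw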
We also trained a German-to-English Moses system with pre-reordering performed by Collins et al. (2005) rules, implemented by Howlett and Dras (2011). Constituency parsing for Collins et al. (2005) rules was performed with the Berkeley parser (Petrov et al., 2006). For Italian-to-English we did not compare with a hand-coded reordering system as we are not aware of any strong pre-reordering baseline for this language pair. For our experiments, we extract approxi8Corriere.it and Asianews.it 852 mately 300,000 sentence pairs from the Moses training set based on a heuristic confidence measure of word-alignment quality (Huang, 2009), (Navratil et al., 2012). We randomly removed 2,000 sentences from this filtered dataset to form a validation set for early stopping, the rest were used for training the prereordering models. 4.2 Results The hidden state size s of the RNNs was set to 100 while it was set to 30 for the GRU model, validation was performed every 2,000 training examples. After 50 consecutive validation rounds without improvement, training was stopped and the set of training parameters that resulted in the lowest validation crossentropy were saved. Training took approximately 1.5 days for the unlexicalized Base RNN-RM, 2.5 days for the lexicalized Base RNN-RM and for the unlexicalized Base GRU-RM and 5 days for the unlexicalized Fragment RNN-RM on a 24-core machine without GPU (CPU load never rose to more than 400%). Decoding was performed with a beam size of 4. Decoding the whole German corpus took about 1.0-1.2 days for all the models except Fragment RNN-RM for which it took about 3 days. Decoding for the Italian corpus for the Base GRU-RM took approximately 1.5 days. Effects on monolingual reordering score are shown in fig. 3 (German) and fig. 4 (Italian), effects on translation quality are shown in fig. 5 (German-to-English) and fig. 6 (Italian-to-English)9. 4.3 Discussion and analysis All our German-to-English models significantly improved over the phrase-based baseline, performing as well as or almost as well as (Collins et al., 2005), which is an interesting result since our models doesn’t require any specific linguistic expertise. Surprisingly, the lexicalized version of Base RNN-RM performed worse than the unlexi9Although the baseline systems were trained on the same datasets used in Miceli Barone and Attardi (2013), the results are different since we used a different version of Moses calized one. This goes contrary to expectation as neural language models are usually lexicalized and in fact often use nothing but lexical features. The unlexicalized Fragment RNN-RM was quite accurate but very expensive both during training and decoding, thus it may not be practical. The unlexicalized Base GRU-RM performed very well, especially on the Europarl dataset (where all the scores are much higher than the other datasets) and it never performed significantly worse than the unlexicalized Fragment RNN-RM which is much slower. We also performed exploratory experiments with different feature sets (such as lexical-only features) but we couldn’t obtain a good training error. Larger network sizes should increase model capacity and may possibly enable training on simpler feature sets. The Italian-to-English experiment with Base GRU-RM confirmed that this model performs very well on a language pair with different reordering phenomena than Germanto-English. 5 Conclusions We presented a class of statistical syntaxbased, non-projective, non-tree-local prereordering systems for machine translation. 
Our systems processes source sentences parsed with non-projective dependency parsers and permutes them into a targetlike word order, suitable for translation by an appropriately trained downstream phrase-based system. The models we proposed are completely trained with machine learning approaches and is, in principle, capable of generating arbitrary permutations, without the hard constraints that are commonly present in other statistical syntax-based pre-reordering methods. Practical constraints depend on the choice of features and are therefore quite flexible, allowing a trade-off between accuracy and speed. In our experiments with the RNN-RM and GRU-RM models we managed to achieve translation quality improvements compara853 Reordering BLEU improvement none 62.10 unlex. Base RNN-RM 64.03 +1.93 lex. Base RNN-RM 63.99 +1.89 unlex. Fragment RNN-RM 64.43 +2.33 unlex. Base GRU-RM 64.78 +2.68 Figure 3: German ”Monolingual” reordering scores (upstream system output vs. ”oracle”permuted German) on the Europarl test set. All improvements are significant at 1% level. Reordering BLEU improvement none 73.11 unlex. Base GRU-RM 81.09 +7.98 Figure 4: Italian ”Monolingual” reordering scores on the Europarl test set. All improvements are significant at 1% level. Test set system BLEU improvement Europarl baseline 33.00 Europarl ”oracle” 41.80 +8.80 Europarl Collins 33.52 +0.52 Europarl unlex. Base RNN-RM 33.41 +0.41 Europarl lex. Base RNN-RM 33.38 +0.38 Europarl unlex. Fragment RNN-RM 33.54 +0.54 Europarl unlex. Base GRU-RM 34.15 +1.15 news2013 baseline 18.80 news2013 Collins NA NA news2013 unlex. Base RNN-RM 19.19 +0.39 news2013 lex. Base RNN-RM 19.01 +0.21 news2013 unlex. Fragment RNN-RM 19.27 +0.47 news2013 unlex. Base GRU-RM 19.28 +0.48 news2009 baseline 18.09 news2009 Collins 18.74 +0.65 news2009 unlex. Base RNN-RM 18.50 +0.41 news2009 lex. Base RNN-RM 18.44 +0.35 news2009 unlex. Fragment RNN-RM 18.60 +0.51 news2009 unlex. Base GRU-RM 18.58 +0.49 Figure 5: German-to-English RNN-RM translation scores. All improvements are significant at 1% level. Test set system BLEU improvement Europarl baseline 29.58 Europarl unlex. Base GRU-RM 30.84 +1.26 Figure 6: Italian-to-English RNN-RM translation scores. Improvement is significant at 1% level. 854 ble to those of the best hand-coded prereordering rules. References Yaser Al-Onaizan and Kishore Papineni. 2006. Distortion models for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, ACL-44, pages 529–536, Stroudsburg, PA, USA. Association for Computational Linguistics. Giuseppe Attardi. 2006. Experiments with a multilanguage non-projective dependency parser. In Proceedings of the Tenth Conference on Computational Natural Language Learning, CoNLL-X ’06, pages 166–170, Stroudsburg, PA, USA. Association for Computational Linguistics. James Bergstra, Olivier Breuleux, Fr´ed´eric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. 2010. Theano: a CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), June. Oral Presentation. Alexandra Birch, Miles Osborne, and Philipp Koehn. 2008. Predicting success in machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’08, pages 745–754, Stroudsburg, PA, USA. Association for Computational Linguistics. 
Arianna Bisazza and Marcello Federico. 2013. Efficient solutions for word reordering in German-English phrase-based statistical machine translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 440–451, Sofia, Bulgaria, August. Association for Computational Linguistics. Cristina Bosco and Vincenzo Lombardo. 2004. Dependency and relational structure in treebank annotation. In COLING 2004 Recent Advances in Dependency Grammar, pages 1–8. Kyunghyun Cho, Bart van Merri¨enboer, Dzmitry Bahdanau, and Yoshua Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259. Michael Collins, Philipp Koehn, and Ivona Kuˇcerov´a. 2005. Clause restructuring for statistical machine translation. In Proceedings of the 43rd annual meeting on association for computational linguistics, pages 531–540. Association for Computational Linguistics. Nadir Durrani, Helmut Schmid, and Alexander Fraser. 2011. A joint sequence translation model with integrated reordering. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1045–1054. Association for Computational Linguistics. Chris Dyer and Philip Resnik. 2010. Context-free reordering, finite-state translation. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT ’10, pages 858–866, Stroudsburg, PA, USA. Association for Computational Linguistics. Minwei Feng, Arne Mauser, and Hermann Ney. 2010. A source-side decoding sequence model for statistical machine translation. In Conference of the Association for Machine Translation in the Americas (AMTA). C. Gallicchio and A. Micheli. 2011. Architectural and markovian factors of echo state networks. Neural Networks, 24(5):440 – 456. Dmitriy Genzel. 2010. Automatically learning source-side reordering rules for large scale machine translation. In Proceedings of the 23rd International Conference on Computational Linguistics, COLING ’10, pages 376–384, Stroudsburg, PA, USA. Association for Computational Linguistics. Alex Graves. 2013. Generating sequences with recurrent neural networks. arXiv preprint arXiv:1308.0850. Susan Howlett and Mark Dras. 2011. Clause restructuring for SMT not absolutely helpful. In Proceedings of the 49th Annual Meeting of the Assocation for Computational Linguistics: Human Language Technologies, pages 384–388. Fei Huang. 2009. Confidence measure for word alignment. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2Volume 2, pages 932–940. Association for Computational Linguistics. Herbert Jaeger. 2001. The echo state approach to analysing and training recurrent neural networks-with an erratum note. Bonn, Germany: German National Research Center for Information Technology GMD Technical Report, 148:34. Maxim Khalilov and Jos´e AR Fonollosa. 2011. Syntax-based reordering for statistical machine translation. Computer speech & language, 25(4):761–788. Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. 855 Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondˇrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. 
Moses: open source toolkit for statistical machine translation. In Proceedings of the 45th Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, ACL ’07, pages 177–180, Stroudsburg, PA, USA. Association for Computational Linguistics. Philipp Koehn. 2005. Europarl: A parallel corpus for statistical machine translation. In Conference Proceedings: the tenth Machine Translation Summit, pages 79–86, Phuket, Thailand. AAMT, AAMT. Uri Lerner and Slav Petrov. 2013. Source-side classifier preordering for machine translation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP ’13). Antonio Valerio Miceli Barone and Giuseppe Attardi. 2013. Pre-reordering for machine translation using transition-based walks on dependency parse trees. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 164–169, Sofia, Bulgaria, August. Association for Computational Linguistics. Tomas Mikolov, Martin Karafi´at, Lukas Burget, Jan Cernock`y, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In INTERSPEECH, pages 1045– 1048. Jiri Navratil, Karthik Visweswariah, and Ananthakrishnan Ramanathan. 2012. A comparison of syntactic reordering methods for english-german machine translation. In COLING, pages 2043–2058. Franz Josef Och, Christoph Tillmann, Hermann Ney, et al. 1999. Improved alignment models for statistical machine translation. In Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 20–28. Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics, pages 433–440. Association for Computational Linguistics. Ralf Steinberger, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Tomaz Erjavec, Dan Tufis, and Dniel Varga. 2006. The jrc-acquis: A multilingual aligned parallel corpus with 20+ languages. In Proceedings of the 5th International Conference on Language Resources and Evaluation (LREC’2006), Genoa, Italy. Roy Tromble and Jason Eisner. 2009. Learning linear ordering problems for better translation. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 2 - Volume 2, EMNLP ’09, pages 1007–1016, Stroudsburg, PA, USA. Association for Computational Linguistics. Karthik Visweswariah, Rajakrishnan Rajkumar, Ankur Gandhe, Ananthakrishnan Ramanathan, and Jiri Navratil. 2011. A word reordering model for improved machine translation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP ’11, pages 486–496, Stroudsburg, PA, USA. Association for Computational Linguistics. Matthew D Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701. 856
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 857–866, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Detecting Deceptive Groups Using Conversations and Network Analysis Dian Yu1, Yulia Tyshchuk2, Heng Ji1, William Wallace2 1Computer Science Department, Rensselaer Polytechnic Institute 2Department of Industrial and Systems Engineering, Rensselaer Polytechnic Institute 1,2{yud2,tyshcy,jih,wallaw}@rpi.edu Abstract Deception detection has been formulated as a supervised binary classification problem on single documents. However, in daily life, millions of fraud cases involve detailed conversations between deceivers and victims. Deceivers may dynamically adjust their deceptive statements according to the reactions of victims. In addition, people may form groups and collaborate to deceive others. In this paper, we seek to identify deceptive groups from their conversations. We propose a novel subgroup detection method that combines linguistic signals and signed network analysis for dynamic clustering. A social-elimination game called Killer Game is introduced as a case study1. Experimental results demonstrate that our approach significantly outperforms human voting and state-of-theart subgroup detection methods at dynamically differentiating the deceptive groups from truth-tellers. 1 Introduction Deception generally entails messages and information intentionally transmitted to create a false conclusion (Buller et al., 1994). Deception detection is an important task for a wide range of applications including law enforcement, intelligence gathering, and financial fraud. Most of the previous work (e.g., (Ott et al., 2011; Feng et al., 2012)) focused on content analysis of a single document in isolation (e.g., a product review). The promoters of a product may post fake complimentary reviews, while their competitors may hire people to write fake negative reviews (Ott et al., 2011). 1The data set is publicly available for research purposes at: http://nlp.cs.rpi.edu/data/killer.zip However, when we want to detect deception from text or voice conversations, the deception behavior may be affected by the following factors beyond textual statements. 1. Dynamic. Recent research in social science suggests that deception communication is dynamic and involves interactions among people (e.g., (Buller and Burgoon, 1996)). Additionally, the research postulates that human’s capacity to learn by observation enables him to acquire large, integrated units of behavior by example (Bandura, 1971). Therefore, a person’s behavior concerning deception or truth-telling can change constantly, while he learns from others’ statements during conversations. 2. Global. People may form groups for purpose of deception. Research in social psychology has shown that an individual’s object-related behavior may be affected by the attitudes of other people due to group dynamics (Friedkin, 2010). Recent studies typically have been conducted over “static” written or oral deceptive statements. There is no obligatory requirement for communication between the author and the readers of these statements (Yancheva and Rudzicz, 2013). As a result, a victim of deception tends to trust the story mainly based on the statement he reads (Ott et al., 2011). However, in daily life, millions of fraud cases involve detailed conversations between deceivers and victims. 
A deceiver may make a statement, which is partially true in order to deceive or mislead victims and adjust his deceptive strategies based on the reactions of victims (Zhou et al., 2004). Therefore, it is more challenging to identity a deceiver in an interactive process of deception. Most deception detection research addressed individual deceivers, but deceivers often act in pairs or larger groups (Vrij et al., 2010). The interac857 Identify a player’s attitude toward other players based on his statement during each round Clustering ① ② ③ Subgroups ① ② ③ Signed Network (each round) 1 1 -1 -1 -1 -1 ① ② ③ Player Attitude Profile (each round) ① ② ③ ① ② ③ 1 −1 1 −1 1 −1 1 −1 1 Partition Subgroups ① ② ③ Cluster Ensembles Subgroups ① ② ③ Figure 1: Deceptive group detection for a single round. tions within a deceptive group have been ignored. For example, a product review from a deceiver may be supported by his teammates so that his deceptive comments can be read by more potential buyers. In this case, we can identify a deceptive group based on their collaborations and common characteristics, which is more promising than the typical methods of classifying individual statements as deceptive or trustworthy. In order to identify deceptive groups by analyzing the evolution of a person’s deception strategy during his interactions with victims and the interactions within the deceptive group from conversations, we use a social-elimination game called Killer Game which contains the ground-truth of subgroups. The killer game has many variants that involve different roles and skills. We choose a classical version played by three roles/teams: detectives, citizens, and killers. The role of each player (game participant) is randomly assigned by a third-party game judge. Every killer/detective is given the identities of his teammates. There are two alternating phases of the game: “night”, when killers may covertly “murder” a player and detectives may learn one player’s role; and “day”, when surviving players are informed of who was killed last “night” and then asked to speculate about the roles of other surviving players. Before a “day” ends, every surviving player should vote for a suspect. The candidate with the most votes is eliminated. A player’s identity is not exposed after his “death”. The game continues until all killers have been eliminated or all detectives have been killed. The killers are treated as deceivers, and citizens and detectives as truth-tellers. In this paper, we present an unsupervised approach for differentiating the deceptive groups from truth-tellers in a game. During each round, we use Natural Language Processing (NLP) techniques to identify a player’s attitude toward other players (Section 2), which are used to construct a vector of attitudes for each surviving player (Section 3.1) and a signed social network representation (Section 3.2) for the discussions. Then we use a clustering algorithm to cluster the attitude vector space and obtain results for each round (Section 3.1). We also implement a greedy optimization algorithm to partition the singed network based on the attitude clustering result (Section 3.2). Finally, we apply a pairwise-similarity approach that makes use of the predicted cooccurrence relations between players to combine all results from each round (Section 3.3). Figure 1 provides an overview of our system pipeline. The major novel contributions of this paper are as follows. 
• This is the first study to investigate conversations and deceptive groups for computerized deception detection.
• The proposed clustering technique is shown to be successful in separating deceptive groups from truth-tellers.
• The method can be applied to dynamically detect subgroups in a network with discussants who tend to change their opinions.

2 Attitude Identification

In this section, we describe how we take a player's statement in a single round as input to extract his attitudes toward other players and represent them by an attitude 3-tuple (speaker, target, polarity) list. For this work, the polarity of attitudes (Balahur et al., 2009) can be positive (1), negative (-1) or neutral (0). A game log from a single round will be used as our illustrative example, as shown in Figure 2.

Figure 2: Killer game sample log (the 1st round). C: CITIZEN; D: DETECTIVE; K: KILLER
System: First Round.
System: 15 was killed last night. 15, please leave your last words.
15(C): I'm a citizen. Over.
16(K): I'm a good person. 11 and 2 are suspicious.
1(K): I'm a good person. It has been a long time since I played as a killer. I'm a citizen. 11 is suspicious and I don't want to comment on 16's statement.
2(C): I'm a detective. 6 was proved as a killer last night. Over.
3(C): I don't know 2's identity. It's hard to judge 16's statement. 1 seems to be a good person. I'm a citizen.
4(C): Citizen. I cannot find a killer. I trust 2 since 2 sounds a good person. 16 is suspicious. I regard 16 as a killer. I'm 2's teammate.
5(D): I'm a detective. I verify 2's identity and 2 is a killer. 13 is good.
6(C): Why do you want to attack 2? I don't understand. 14 is suspicious.
7(K): It's hard to define 6's identity. 4 may be a citizen. I will vote for 2. 6 sounds very weird and I found 6 very suspicious. I will follow the detective 5 to vote for 2.
8(C): We should calm down. 7 seems to be a bad person.
9(C): 1 and 7 seem to be killers. There is no evidence to support 2 as a detective. 3 is a citizen. 4 is possibly a detective. 6 is also good.
10(D): I agree with you. 7 must be a killer. 2 and 7 should debate.
11(C): I don't know 2 but I think 2 is good. 3 is good. There should be one or two killers among 1, 4 and 7.
12(K): 11 sounds like a killer. 2 is a killer. I'm a citizen. Vote for 2.
13(D): 15 is a citizen. 16 is logically good. I think 1, 8, 9, 10 are OK. I don't think 2 is a killer. I doubt 7's intention. Please vote for 7.
14(D): 10, 13, 16 are good. I don't think 7 must be a killer. 2 is obviously bad. I'm a citizen.
System: 16, 11, 14, 7, 1, 3, 8, 12, 4 vote for 2 · · · 10, 13, 5, 2 vote for 7 · · · 9, 6 vote for 11 · · · 2 is out.

2.1 Target and Attitude Word Identification

We start by identifying targets and attitude words from conversations. In the killer game, a target is represented by his unique game ID (assigned by the online game system based on when he entered the game room), and game terms are regarded as attitude words. We collected 41 terms in total from the game's website (e.g., http://www.3j3f.com/how/) and related discussion forum posts. ICTCLAS (Zhang et al., 2003) is used for word segmentation and part-of-speech (POS) tagging. There are two kinds of game terms: positive and negative. Positive terms include "citizen", "good person", "good person certified by the detectives" and "detective". Negative terms include "killer", "killer verified by the detectives" and "a killer who claimed himself/herself to be a detective". We assign the polarity scores +1 and -1 to positive and negative terms, respectively.

2.2 Attitude-Target Pairing

Then we associate each attitude word with its corresponding target. We remove interrogative and exclamatory sentences and only keep the sentences that include at least one attitude word from a player's statement during each round. We develop a rule-based approach for attitude-target pairing: if there is at least one ID in the sentence, we associate all attitude words in that sentence with it.
For instance, given Player 16’s statement in Figure 2, its attitude tuple list is: [(16, 16, +1), (16, 11, -1), (16, 2, -1), (16, 1, 0), (16, 3, 0), ..., (16, 15, 0)]. 3 Clustering Since the statements in conversations are relatively short and concise, it is difficult to identify which one is deceptive, even using deep linguistic features such as the language style. In this section, we introduce a method to construct an attitude profile for each player and a signed network based on the attitude tuple list in Section 2, and combine them to analyze a dynamic network with discussants telling lies and truths. 3.1 Clustering based on Attitude Profile We use a vector containing numerical values to represent each player’s attitude toward identified targets in each round. The values correspond to the polarity scores in a player’s attitude tuple list. For example, the polarity score of player 16’s attitude toward target 11 is −1 as shown in Figure 2. 859 We call this vector as the discussant attitude profile (DAP) following (Abu-Jbara et al., 2012a). Suppose there are n players who participate in a single game. Since a player’s identity is not exposed to the public after his death4, people can still analyze the identity of a “dead” player. Therefore, the number of possibly mentioned targets in each round equals to n. Given all the statements from m surviving players in a single round, each player’s DAP has n + 1 dimensions including his vote and thus we can have a m × (n + 1) attitude matrix A where Aij represents the attitude polarity of i toward j we got from Section 2. Ai(n+1) represents i’s vote. In a certain round, given a set of m surviving players X = {x1, x2, · · · , xm} to be clustered and their respective DAPs, we can modify the Euclidean metric to compute the differences in attitudes and get an m × m distance matrix M: Mij = v u u t n X k=1 (Aik −Ajk)2 + (2 −2δAi(n+1),Aj(n+1))2 (1) The Kronecker delta function δ is: δij =  1 i = j 0 i ̸= j (2) We use this function to compare the votes of two players separately because a player’s vote can be inconsistent with his previous statements. We assume that there is a larger distance between two players when they vote for different suspects. A common assumption in previous research was that a member is more likely to show a positive attitude toward other members in the same group, and a negative attitude toward the opposing groups (Abu-Jbara et al., 2012a). However, a deceiver may pretend to be innocent by supporting those truth-tellers and attacking his teammates, whose identities have already been exposed. Therefore, it is not enough to judge the relationship between two players by simply measuring the distance between their DAPs. In addition to comparing DAPs between players i and j, we also consider the attitudes of other players toward i and j, as well as their attitudes 4Each round, the player killed by killers and the player with the most votes are out. toward each other. We modify Mij as follows and show it in Figure 3: M ′ ij = Mij + v u u t m X k=1 (Aki −Akj)2 + (h(Aij) + h(Aji))2 (3) where the function h detects the negative attitudes. h(x) = 0 if x ≥0 and h(x) = −1 otherwise. We perform hierarchical clustering on the condensed distance matrix of M and use the complete linkage method to compute the distance between two clusters (Voorhees, 1986). We set the number of clusters as 3 since there are three natural groups in the game. We focus on separating deceivers (killers) from truth-tellers (citizens and detectives). 
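A minimal Python sketch of the rule-based pairing of Section 2.2 and of the per-round attitude vector just described is given below. It is a simplification under our own assumptions rather than the authors' implementation: English stand-ins replace the 41 Chinese game terms, negation handling and the POS-sequence filter are omitted, and the aggregate polarity of a sentence is shared by all IDs it mentions.

import re

POSITIVE_TERMS = {"citizen", "good person", "detective"}   # illustrative stand-ins for
NEGATIVE_TERMS = {"killer", "suspicious"}                  # the 41 Chinese game terms

def attitude_tuples(speaker_id, sentences, n_players):
    # one (speaker, target, polarity) tuple per player; targets never mentioned
    # with a polar term default to 0 (neutral)
    polarity = {j: 0 for j in range(1, n_players + 1)}
    for sent in sentences:
        if sent.endswith(("?", "!")):                      # drop interrogative/exclamatory sentences
            continue
        text = sent.lower()
        score = sum(text.count(t) for t in POSITIVE_TERMS) \
              - sum(text.count(t) for t in NEGATIVE_TERMS)
        if score == 0:
            continue
        ids = [int(t) for t in re.findall(r"\d+", sent) if 1 <= int(t) <= n_players]
        for j in (ids if ids else [speaker_id]):           # no ID mentioned: attribute to the speaker
            polarity[j] = 1 if score > 0 else -1
    return [(speaker_id, j, polarity[j]) for j in sorted(polarity)]

def attitude_vector(tuples):
    # the per-round vector of polarity scores used for clustering
    return [p for (_, _, p) in sorted(tuples, key=lambda t: t[1])]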
𝑖 𝑗 𝑖 𝑗 compare 𝑖 and 𝑗′s DAPs Figure 3: Computation of the distance between player i and j based on the attitude matrix. 3.2 Signed Network Partition When we computed the distance between two players in Section 3.1, we did not consider the network structure among all the players. For example, if A supports C, B supports D and C and D dislike each other, A and B may belong to different groups. Thus, we propose to capture the interactions in the social network to further improve the attitude-profile-based clustering result. We can easily convert the attitude matrix A into a signed network by adding a directed edge i →j between i and j if Aij ̸= 0. We denote a directed graph corresponding to a signed network as G = (V, S, N, W), where V is the set of nodes, S is the set of positive edges, N is the set of negative edges and W : (V × V ) →{−1, 1} is a function that maps every directed edge to a value, W(i, j) = Aij. We use a greedy optimization algorithm (Doreian and Mrvar, 1996) to find partitions. A criterion function for an optimal partitioning procedure 860 is constructed such that positive links are dense within groups and negative links are dense between groups. For any potential partition C, we seek to minimize the following error function: E(C) = X C∈C [(1 −γ) X i∈C j /∈C W(i, j)Si,j −γ X i,j∈C W(i, j)Ni,j] (4) where γ ∈[0, 1] controls the balance of the penalty difference between putting a positive edge across and a negative edge within a group. We regard these two types of errors as equally important and set γ = 0.5 for our experiments. Initially, we use the clustering result in Section 3.1 to partition nodes into three different groups and an error function, E, is evaluated for that cluster. Every cluster has a set of neighbor clusters in the cluster space. A neighbor cluster is obtained by moving a node from one group to another, or exchanging two nodes in two different groups. E is evaluated for all the neighbor clusters of the current cluster and the one with the lowest value is set as the new cluster. The algorithm is repeated until it finds a minimal solution5. We set the upper limit for the number of subgroups to 3. 3.3 Cluster Ensembles The relationships between players are dynamic throughout the game. For example, a killer tends to hide his identity and pretends to be friendly to others at later stages in order to survive. Thus, it is insufficient to rely on a single round’s discussion to cluster players. In addition, for each single round, we also need to combine the clustering results from the attitude profiles of the players and the signed network. In a game with information gathered from up to r rounds, let P = {P1, P2, · · · , Pr} be the set of r clusterings (partitionings) based on attitude profiles and P ′ = {P ′ 1, P ′ 2, · · · , P ′ r} be the set of r clusterings based on the signed network. Using the co-occurrence relations between players, we can generate a n × n pairwise similarity matrix T based on the information of all r rounds: T r ij = λ · voteij + (1 −λ) · vote ′ ij rij (5) 5Since our graphs are small, we search through all partitions. We repeated 1000 times in our experiment. where voteij, vote ′ ij are the number of times that player i and j are assigned to the same cluster in P and P ′ respectively. rij denotes the number of rounds when both of them survived (rij ≤r). T r ij ∈[0, 1]. We assign a higher weight to the result of P1 and set λ = 2/3 in our experiments. 
Given the input in Figure 2, x3 and x4 are assigned to the same cluster in P1 (vote34 = 1) and in P ′ 1 (vote ′ 34 = 1) respectively as shown in Figure 4. x3 and x4 co-occurred in the first round (r34 = 1). T 1 34 = (2/3 × 1 + 1/3 × 1)/1 = 1. 𝑥16 𝑷𝟏 𝑷′𝟏 Round 1 𝑥2 𝑥11 𝑥1 𝑥7 𝑥12 𝑥14 𝑥3 𝑥15 𝑥4 𝑥5 𝑥13 𝑥10 𝑥8 𝑥9 𝑥6 𝑥2 𝑥11 𝑥14 𝑥3 𝑥15 𝑥4 𝑥5 𝑥13 𝑥10 𝑥8 𝑥9 𝑥6 𝑥16 𝑥1 𝑥7 𝑥12 KILLER CITIZEN OR DETECTIVE Figure 4: Example of cluster ensemble for a single round. We apply hierarchical clustering (Voorhees, 1986) to the similarity matrix above to obtain the final global clustering results. 4 Experiments 4.1 Dataset Construction We recorded 10 games from 3J3F6, one of the most popular Chinese online killer game websites 7. A screenshot of the game system interface is shown in Figure 5. There are 16 participating players per game: 4 detectives, 4 killers and 8 citizens. Each player occupies a position in 1 . All the surviving players can express their attitudes via a voice channel using 2 , while detectives and killers can also communicate with teammates in their respective private team channels 3 via texts. The system provides real-time updates on the game progress, voting results, and so on using the public channel 4 . We manually transcribed speech and stored the text information in the public channel, which contains the voting and death information. The average game length 6http://www.3j3f.com 7All data sets and resources will be made available for research purposes upon the acceptance of the paper. 861 Game # Purity (%) Entropy D N H eD eD + N D N H eD eD + N 1 68.8 75.0 75.0 68.8 75.0 0.48 0.50 0.78 0.63 0.50 2 75.0 68.8 68.8 43.8 81.3 0.71 0.69 0.81 0.73 0.43 3 43.8 81.3 56.3 75.0 75.0 0.77 0.67 0.81 0.72 0.72 4 75.0 62.5 75.0 93.8 93.8 0.78 0.68 0.74 0.28 0.28 5 62.5 75.0 81.3 75.0 75.0 0.61 0.50 0.61 0.72 0.72 6 81.3 81.3 75.0 81.3 81.3 0.64 0.38 0.74 0.60 0.60 7 81.3 75.0 81.3 81.3 87.5 0.65 0.70 0.68 0.51 0.51 8 87.5 75.0 75.0 93.8 93.8 0.41 0.73 0.78 0.23 0.23 9 75.0 43.8 75.0 81.3 87.5 0.76 0.80 0.78 0.67 0.49 10 62.5 75.0 87.5 81.3 81.3 0.78 0.60 0.51 0.61 0.67 Average 71.3 71.3 75.0 77.5 83.2 0.66 0.62 0.72 0.57 0.51 Table 1: Results on subgroup detection. D refers to DAPC, N refers to Network, H refers to Human Voting, and eD refers to extended DAPC. is about 76.3 minutes and there are on average 5 rounds and 411 sentences per game. Note that our method is language-independent and could easily be adapted to other languages. Current Speaker: 14 TEAM CHANNEL PUBLIC CHANNEL START END OUT 1 2 3 4 Figure 5: Screenshot of the online killer game interface. 4.2 Evaluation Metrics We use two metrics to evaluate the clustering accuracy: Purity and Entropy. Purity (Manning et al., 2008) is a metric in which each cluster is assigned to the class with the majority vote in the cluster, and then the accuracy of this assignment is measured by dividing the number of correctly assigned instances by the total number of instances N. More formally: purity(Ω, C) = 1 N X k maxj|wk ∩cj| (6) where Ω= {w1, w2, · · · , wk} is the set of clusters and C = {c1, c2, · · · , cj} is the set of classes. wk is interpreted as the set of instances in wk and cj is the set of instances in cj. The purity increases as the quality of clustering improves. Entropy (Steinbach et al., 2000) measures the uniformity of a cluster. 
The entropy for all clusters is defined by the weighted sum of the entropy of each cluster: Entropy = − j X nj n i X P(i, j) × log2P(i, j) (7) where P(i, j) is the probability of finding an element from the category i in the cluster j, nj is the number of items in cluster j and n is the total number of items in the distribution. The entropy decreases as the quality of clustering improves. 4.3 Overall Performance We compare our approach with two state-of-theart subgroup detection methods and human performance as follows: 1. DAPC: In Section 3.1, we introduced our implementation of the discussant attitude profile clustering (DAPC) method proposed in (AbuJbara et al., 2012a). In the original DAPC method, for each opinion target, there are 3 dimensions in the feature vector, corresponding to (1) the number of positive expressions, (2) negative expressions toward the target from the online posts and (3) the number of times the discussant mentioned the target. For our experiment, we only keep one dimension representing the discussant’s attitude (positive, negative, neutral) toward the target since a discussant attitude remains the same in his statement within a single round. 2. Network: We also implemented the signed network partition method for subgroup detection proposed by (Hassan et al., 2012). To determine the number of subgroups t, we set an upper limit of t = 3 in order to minimize the optimization function. 862 3. Human Voting: We also compare our methods with human voting results. There are two subgroups based on the voting results. The players with the highest votes each round belong to one subgroup and the rest of the players are in the other subgroup. Table 1 shows the overall performance of various methods on subgroup detection and Figure 6 depicts the average performance. We can see that our method significantly outperforms two baseline methods and human voting. The human performance is not satisfying, which indicates it’s very challenging even for a human to identify a deceiver whose deceptive statement is mixed with plenty of truthful opinions (Xu and Zhao, 2012). 1 Human_Voting BL_DAPC BL_Network EDAPC EDAPC+Network 50 55 60 65 70 75 80 85 % Method Purity Entropy Figure 6: An overview of the average performance of all the methods. By extending the DAPC method (EDPAC), we can estimate the distance between two players more accurately by considering the attitudes of other players toward them and their attitudes toward each other. Given the log in Figure 2 as input, players 5 (detective) and 7 (killer) are clustered into one group when DAPC is applied since they don’t have conflicting views on the identities of other players. However, 5 voted for 7 and is supported by more players compared with 7, which indicates that they are less likely to be teammates. We can successfully separate them after re-computing the distance between them. Adding network information provided 5.7% further gain in Purity. In some cases, the performance remains the same when EDAPC clustering result is already optimal with the minimum value of the criterion function. 4.4 Dynamic Subgroup Detection As shown in Figure 7, the performance of our approach improves as the game proceeds. Players seldom maintain their opinions throughout a game. Figure 2 shows that most killers (16,1,12) insisted that citizen 11 should be a killer except 7. As a response to the group pressure (Asch, 1951), 7 changed his opinion and stated that 11 could be a killer in the following round. 
In reality, a discussant who participates in an online discussion tends to change his opinions about a target as he learns more information, which shows both the necessity and importance of the dynamic detection of subgroups. Our method can be applied to detect subgroups dynamically by grouping posts into multiple discussion “rounds” based on their timestamps. 1 Purity Entropy 50 60 70 80 % 1st round 1st + 2nd rounds all rounds Figure 7: Average performance based on different rounds. 5 Related Work 5.1 Opinion Analysis Our work on mining a player’s attitude toward other players is related to opinion mining. Attitudes and opinions are related and can be regarded as the same in our task. Compared with the previous work (e.g.,(Qiu et al., 2011; Kim and Hovy, 2006)), the opinion words and targets in our task are relatively easier to recognize due to the simplicity of statements. Some recent work (e.g., (Somasundaran and Wiebe, 2009; Abu-Jbara et al., 2012a)) developed syntactic rules to pair an opinion word and a target if they satisfy at least one specific dependency rule. We use POS tag sequences to efficiently help us filter out irrelevant pairs. 863 5.2 Deception Detection Most of the previous computational work for deception detection used supervised/semisupervised classification methods (Li et al., 2013b). Besides lexical and syntactical features (Ott et al., 2011; Feng et al., 2012; Yancheva and Rudzicz, 2013), Feng and Hirst (2013) proposed using profile compatibility to distinguish fake and genuine reviews. Xu and Zhao (2012) used deep linguistic features such as text genre to detect deceptive opinion spams. Banerjee et al. (2014) used extended linguistic signals such as keystroke patterns. Li et al. (2013a) used topic models to detect the difference between deceptive and truthful topic-word distribution. Researchers have began to realize the importance of analyzing computer-mediated communication in deception detection. Zhou and Sung (2008) conducted an empirical study on deception cues using the killer game as a task scenario and obtained many interesting findings (e.g., deceivers send fewer messages than truth-tellers). Our work is most related to the work of Chittaranjan and Hung (2010) on detecting deceptive roles in the Werewolf Game which is another variant of the killer game. They created a Werewolf data set by audio-visual recording 8 games played by 2 groups of people face-to-face and extracted audio features and interaction features for their experiments. However, we should note that non face-to-face deception detection emphasizes verbal and linguistic cues over less controllable nonverbal communication cues (Walther, 1996). 5.3 Subgroup Detection In online discussions, people usually split into subgroups based on various topics. The member of a subgroup is more likely to show positive attitude to the members of the same subgroup, and negative attitude to the members of opposing subgroups (Abu-Jbara et al., 2012a). Previous work also studied subgroup detection in social media sites. Abu-Jbara et al. (2012a) constructed a discussant attitude profile (DAP) for each discussant and then used clustering techniques to cluster their attitudes. Hassan et al. (2012; 2012b; 2013) proposed various methods to automatically construct a signed social network representation of discussions and then identify subgroups by partitioning their signed networks. Qiu et al. 
(2013) applied collaborative filtering through Probabilistic Matrix Factorization (PMF) to generalize and improve extracted opinion matrices. An underlying assumption of the previous work was that a participant will not tell lies nor hide his own stance. Moreover, their work did not take into account that a person’s attitude or stance will change as he learns more by reading the comments from others and acquiring more background knowledge (Bandura, 1971). Our contribution is that we extend the DAP method and combine it with the signed network partition in order to cluster the hidden group members. We also develop a novel cluster ensemble approach in order to analyze the dynamic network. 6 Conclusions and Future Work Using the killer game as a case study, we present an effective clustering method to detect subgroups from dynamic conversations with lies and truths. This is the first work to utilize the dynamics of group conversations for deception detection. Experiments demonstrated that truth-tellers and deceptive groups are separable and the proposed method significantly outperforms baseline approaches and human voting. Our work builds a pathway to future work in deception detection in content-rich dynamic environments such as electronic commerce and repeated interrogation which will require sophisticated content and network analysis. In real-life suspects may be interrogated about particular events on numerous occasions. Our method can potentially be modified to find criminals who act in groups based on their statements. Other applications of this research include law enforcement, financial fraud, fraudulent ad campaigns and social engineering. This study focuses on analyzing the verbal content in conversations. It will be interesting to study non-verbal features such as blink rate, gaze aversion and pauses (Granhag and Str¨omwall, 2002) when people play this game face-to-face and combine the non-verbal and verbal features for deception detection. In addition, it is worth exploring the impact of cross-cultural analysis in detecting deception. When attempting to detect deceit in people of other ethnic origin than themselves, people perform even worse in terms of lie detection accuracy than when judging people of their own ethnic origin (Vrij, 2000). For the future work, we aim to use automatic prediction of deceivers to help truth-tellers win games more easily. 864 Acknowledgement This work was supported by the U.S. DARPA DEFT Program No. FA8750-13-2-0041, ARL NS-CTA No. W911NF-09-2-0053, NSF Awards IIS-0953149 and IIS-1523198, AFRL DREAM project, gift awards from IBM, Google, Disney and Bosch. The views and conclusions contained in this document are those of the authors and should not be interpreted as representing the official policies, either expressed or implied, of the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for Government purposes notwithstanding any copyright notation here on. References A. Abu-Jbara, M. Diab, P. Dasigi, and D. Radev. 2012a. Subgroup detection in ideological discussions. In Proc. Annual Meeting of the Association for Computational Linguistics (ACL 2012). A. Abu-Jbara, A. Hassan, and D. Radev. 2012b. Attitudeminer: mining attitude from online discussions. In Proc. North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT 2012). A. Abu-Jbara, B. King, M. Diab, and D. Radev. 2013. Identifying opinion subgroups in arabic online discussions. In Proc. 
Association for Computational Linguistics (ACL 2013). S. Asch. 1951. Effects of group pressure upon the modification and distortion of judgments. Groups, leadership, and men. S. A. Balahur, R. Steinberger, E. Goot, B. Pouliquen, and M. Kabadjov. 2009. Opinion mining on newspaper quotations. In IEEE/WIC/ACM International Joint Conferences on Web Intelligence and Intelligent Agent Technologies (WI-IAT 2009). A. Bandura. 1971. Social Learning Theory. General Learning Corporation. R. Banerjee, S. Feng, J. Kang, and Y. Choi. 2014. Keystroke patterns as prosody in digital writings: A case study with deceptive reviews and essays. In Proc. Empirical Methods on Natural Language Processing (EMNLP 2014). D. Buller and J. Burgoon. 1996. Interpersonal deception theory. Communication theory. David B Buller, Judee K Burgoon, JA Daly, and JM Wiemann. 1994. Deception: Strategic and nonstrategic communication. Strategic interpersonal communication. G. Chittaranjan and H. Hung. 2010. Are you awerewolf? detecting deceptive roles and outcomes in a conversational role-playing game. In IEEE International Conference on Acoustics Speech and Signal Processing (ICASSP 2010). P. Doreian and A. Mrvar. 1996. A partitioning approach to structural balance. Social networks. V. Feng and G. Hirst. 2013. Detecting deceptive opinions with profile compatibility. In Proc. International Joint Conference on Natural Language Processing (IJCNLP 2013). S. Feng, R. Banerjee, and Y. Choi. 2012. Syntactic stylometry for deception detection. In Proc. Association for Computational Linguistics (ACL 2012). N. E. Friedkin. 2010. The attitude-behavior linkage in behavioral cascades. Social Psychology Quarterly. P. Granhag and L. Str¨omwall. 2002. Repeated interrogations: verbal and non-verbal cues to deception. Applied Cognitive Psychology. A. Hassan, A. Abu-Jbara, and D. Radev. 2012. Detecting subgroups in online discussions by modeling positive and negative relations among participants. In Proc. Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL 2012). S. Kim and E. Hovy. 2006. Extracting opinions, opinion holders, and topics expressed in online news media text. In Proc. ACL-COLING 2006 Workshop on Sentiment and Subjectivity in Text. J. Li, C. Cardie, and S. Li. 2013a. Topicspam: a topic-model based approach for spam detection. In Proc. Association for Computational Linguistics (ACL 2013). J. Li, M. Ott, and C. Cardie. 2013b. Identifying manipulated offerings on review portals. In Proc. Empirical Methods on Natural Language Processing (EMNLP 2013). C. Manning, P. Raghavan, and H. Sch¨utze. 2008. Introduction to information retrieval. Cambridge university press Cambridge. M. Ott, Y. Choi, C. Cardie, and J. Hancock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In Proc. Association for Computational Linguistics (ACL 2011). G. Qiu, B. Liu, J. Bu, and C. Chen. 2011. Opinion word expansion and target extraction through double propagation. Computational linguistics. M. Qiu, L. Yang, and J. Jiang. 2013. Mining user relations from online discussions using sentiment analysis and probabilistic matrix factorization. In Proc. North American Chapter of the Association for Computational Linguistics - Human Language Technologies (NAACL HLT 2013). 865 S. Somasundaran and J. Wiebe. 2009. Recognizing stances in online debates. In Proc. Joint Conference of the Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. M. 
Steinbach, G. Karypis, V. Kumar, et al. 2000. A comparison of document clustering techniques. In Proc. KDD 2000 workshop on text mining. E. Voorhees. 1986. Implementing agglomerative hierarchic clustering algorithms for use in document retrieval. Information Processing & Management. A. Vrij, P. Granhag, and S. Porter. 2010. Pitfalls and opportunities in nonverbal and verbal lie detection. Psychological Science in the Public Interest. A. Vrij. 2000. Detecting lies and deceit: The psychology of lying and implications for professional practice. Wiley. J. Walther. 1996. Computer-mediated communication impersonal, interpersonal, and hyperpersonal interaction. Communication research. Q. Xu and H. Zhao. 2012. Using deep linguistic features for finding deceptive opinion spam. In Proc. International Conference on Computational Linguistics (COLING 2012). M. Yancheva and F. Rudzicz. 2013. Automatic detection of deception in child-produced speech using syntactic complexity features. In Proc. Association for Computational Linguistics (ACL 2013). H. Zhang, H. Yu, D. Xiong, and Q. Liu. 2003. Hhmmbased chinese lexical analyzer ictclas. In Proc. SIGHAN 2003 workshop on Chinese language processing. L. Zhou and Y. Sung. 2008. Cues to deception in online chinese groups. In Proc. Hawaii International Conference on System Sciences (HICSS 2008). L. Zhou, J Burgoon, J. Nunamaker, and D. Twitchell. 2004. Automating linguistics-based cues for detecting deception in text-based asynchronous computermediated communications. Group decision and negotiation. 866
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 867–877, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics WikiKreator: Improving Wikipedia Stubs Automatically Siddhartha Banerjee The Pennsylvania State University Information Sciences and Technology University Park, PA, USA [email protected] Prasenjit Mitra Qatar Computing Research Institute Hamad Bin Khalifa University Doha, Qatar [email protected] Abstract Stubs on Wikipedia often lack comprehensive information. The huge cost of editing Wikipedia and the presence of only a limited number of active contributors curb the consistent growth of Wikipedia. In this work, we present WikiKreator, a system that is capable of generating content automatically to improve existing stubs on Wikipedia. The system has two components. First, a text classifier built using topic distribution vectors is used to assign content from the web to various sections on a Wikipedia article. Second, we propose a novel abstractive summarization technique based on an optimization framework that generates section-specific summaries for Wikipedia stubs. Experiments show that WikiKreator is capable of generating well-formed informative content. Further, automatically generated content from our system have been appended to Wikipedia stubs and the content has been retained successfully proving the effectiveness of our approach. 1 Introduction Wikipedia provides comprehensive information on various topics. However, a significant percentage of the articles are stubs1 that require extensive effort in terms of adding and editing content to transform them into complete articles. Ideally, we would like to create an automatic Wikipedia content generator, which can generate a comprehensive overview on any topic using available information from the web and append the generated content to the stubs. Addition of automatically generated content can provide a useful start1https://en.wikipedia.org/wiki/ Wikipedia:Stub ing point for contributors on Wikipedia, which can be improved upon later. Several approaches to automatically generate Wikipedia articles have been explored (Sauper and Barzilay, 2009; Banerjee et al., 2014; Yao et al., 2011). To the best of our knowledge, all the above mentioned methods identify information sources from the web using keywords and directly use the most relevant excerpts in the final article. Information from the web cannot be directly copied into Wikipedia due to copyright violation issues (Banerjee et al., 2014). Further, keyword search does not always satisfy information requirements (Baeza-Yates et al., 1999). To address the above-mentioned issues, we present WikiKreator – a system that can automatically generate content for Wikipedia stubs. First, WikiKreator does not operate using keyword search. Instead, we use a classifier trained using topic distribution features to identify relevant content for the stub. Topic-distribution features are more effective than keyword search as they can identify relevant content based on word distributions (Song et al., 2010). Second, we propose a novel abstractive summarization (Dalal and Malik, 2013) technique to summarize content from multiple snippets of relevant information.2 Figure 1 shows a stub that we attempt to improve using WikiKreator. Generally, in stubs, only the introductory content is available; other sections (s1, ..., sr) are absent. 
The stub also belongs to several categories (C1,C2, etc. in Figure) on Wikipedia. In this work, we address the following research question: Given the introductory content, the title of the stub and information on the categories - how can we transform the stub into a com2An example of our system’s output can be found here – https://en.wikipedia.org/wiki/2014_ Enterovirus_D68_outbreak – content was added on 5th Jan, 2015. The sections on Epidemiology, Causes and Prevention have been added using content automatically generated by our method. 867                                    • • • •                                                 !  "                                                         Figure 1: Overview of our word-graph based generation (left) to populate Wikipedia template (right) prehensive Wikipedia article? Our proposed approach consists of two stages. First, a text classifier assigns content retrieved from the web into specific sections of the Wikipedia article. We train the classifier using a set of articles within the same category. Currently, we limit the system to learn and assign content into the 10 most frequent sections in any given category. The training set includes content from the most frequent sections as instances and their corresponding section titles as the class labels. We extract topic distribution vectors using Latent Dirichlet Allocation (LDA) (Blei et al., 2003) and use the features to train a Random Forest (RF) Classifier (Liaw and Wiener, 2002). To gather web content relevant to the stub, we formulate queries and retrieve top 20 search results (pages) from Google. We use boilerplate detection (Kohlsch¨utter et al., 2010) to retain the important excerpts (text elements) from the pages. The RF classifier classifies the excerpts into one of the most frequent classes (section titles). Second, we develop a novel Integer Linear Programming (ILP) based abstractive summarization technique to generate text from the classified content. Previous work only included the most informative excerpt in the article (Sauper and Barzilay, 2009); in contrast, our abstractive summarization approach minimizes loss of information that should ideally be in an Wikipedia article by fusing content from several sentences. As shown in Figure 1, we construct a word-graph (Filippova, 2010) using all the sentences (WG1) assigned to a specific class (Epidemiology) by the classifier. Multiple paths (sentences) between the start and end nodes in the graph are generated (WG2). We represent the generated paths as variables in the ILP problem. The coefficients of each variable in the objective function of the ILP problem is obtained by combining the information score and the linguistic quality score of the path. We introduce several constraints into our ILP model. We limit the summary for each section to a maximum of 5 sentences. Further, we avoid redundant sentences in the summary that carry similar information. The solution to the optimization problem decides the paths that are selected in the final section summary. For example, in Figure 1, the final paths determined by the ILP solution, – 1 and 2 in WG2, are assigned to a section (sr), where (sr) is the section title Epidemiology. To the best of our knowledge, this work is the first to address the issue of generating content automatically to transform Wikipedia stubs into comprehensive articles. Further, we address the issue of abstractive text summarization for Wikipedia content generation. 
We evaluate our approach by generating articles in three different categories: Diseases and Disorders3, American Mathematicians4 and Software companies of the United States5. Our LDA-based classi3https://en.wikipedia.org/wiki/Category: Diseases_and_disorders 4https://en.wikipedia.org/wiki/Category: American_mathematicians 5https://en.wikipedia.org/wiki/Category: Software_companies_of_the_United_States 868 fier outperforms a TFIDF-based classifier in all the categories. We use ROUGE (Lin, 2004) to compare content generated by WikiKreator and the corresponding Wikipedia articles. The results of our evaluation confirm the benefits of using abstractive summarization for content generation over approaches that do not use summarization. WikiKreator outperforms other comparable approaches significantly in terms of content selection. On ROUGE-1 scores, WikiKreator outperforms the perceptron-based baseline (Sauper and Barzilay, 2009) by ∼20%. We also analyze reviewer reactions, by appending content into several stubs on Wikipedia, most of which (∼77%) have been retained by reviewers. 2 Related Work Wikipedia has been used to compute semantic relatedness (Gabrilovich and Markovitch, 2007), index topics (Medelyan et al., 2008), etc. However, the problem of enhancing the content of a Wikipedia article has not been addressed adequately. Learning structures of templates from the Wikipedia articles have been attempted in the past (Sauper and Barzilay, 2009; Yao et al., 2011). Both these efforts use queries to extract excerpts from the web and the excerpts ranked as the most relevant are added into the article. However, as already pointed out, current standards of Wikipedia requires rewriting of web content to avoid copyright violation issues. To address the issue of copyright violation, multi-document abstractive summarization is required. Various abstractive approaches have been proposed till date (Nenkova et al., 2011). However, these methods suffer from severe deficiencies. Template-based summarization methods work well, but, it assumes prior domain knowledge (Li et al., 2013). Writing style across articles vary widely; hence learning templates automatically is difficult. In addition, such techniques require handcrafted rules for sentence realization (Gerani et al., 2014). Alternatively, we can use text-to-text generation (T2T) (Ganitkevitch et al., 2011) techniques. WikiKreator constructs a word-graph structure similar to (Filippova, 2010) using all the sentences that are assigned to a particular section by a text classifier. Multiple paths (sentences) from the graph are generated. WikiKreator selects few sentences from this set of paths using an optimization problem formulation that jointly maximizes the informa                                          !   " #  "     $" %                                                       !%%&     Figure 2: WikiKreator System Architecture: Content Retrieval and Content Summarization tiveness and readability of section-specific snippets and generates output that is informative, wellformed and readable. 3 Proposed Approach Figure 2 shows the system architecture of WikiKreator. We are required to generate content to populate sections of the stubs (S1, S2, etc.) that belong to category C1. Categories on Wikipedia group together pages on similar subjects. Hence, categories characterize Wikipedia articles surprisingly well (Zesch and Gurevych, 2007). Naturally, we leverage knowledge existing in the categories to build our text classifier. 
To learn category specific templates, the system should learn from articles contained within the same or similar categories. WikiKreator learns category-specific templates using all the articles that can be reached using a top-down approach from the particular category. For example, in addition to C1, WikiKreator also learns templates from articles in C2 and C3 (the subcategories of C1). As shown in the Figure 2, we deploy a two stage process to generate content for a stub: [i] Content Retrieval and [ii] Content Summarization. In the first stage, our focus is to retrieve content that is relevant to the stub, say, S1 that belongs to C1. We extract all the articles that belong to C1 and the subcategories, namely, C2 and C3. A training set is created with the contents in the sections of the articles as instances and the section titles as the corresponding classes. Topic distribution vectors for each section content are generated using LDA (Blei et al., 2003). We train a Random Forest 869 (RF) classifier using the topic distribution vectors. As mentioned earlier, only the top 10 most frequent sections are considered for the multi-class classification task. We retrieve relevant excerpts from the web by formulating queries. The topic model infers the topic distribution features of each excerpt and the RF classifier predicts the section (s1, s2, etc.) of the excerpt. All web automation tasks are performed using HTMLUnit6. In the second stage, our ILP based summarization approach synthesizes information from multiple excerpts assigned to a section and presents the most informative and linguistically well-formed summary as the corresponding content for each section. A wordgraph is constructed that generates several sentences; only a few of the sentences are retained based on the ILP solution. The predicted section is entered in the stub article along with the final sentences selected by the ILP solution as the corresponding section-specific content on Wikipedia. 3.1 Content Retrieval Article Extraction: Wikipedia provides an API7 to download articles in the XML format. Given a category, the API is capable of extracting all the articles under it. We recursively extract articles by identifying all the categories in the hierarchy that can be reached by the crawler using top-down traversal. We use a simple python script8 to extract the section titles and the corresponding text content from the XML dump. Classification model: WikiKreator uses Latent Dirichlet Allocation (LDA) to represent each document as a vector of topic distributions. Each topic is further represented as a vector of probabilities of word distributions. Our intuition is that the topic distribution vectors of the same sections across different articles would be similar. Our objective is to learn these topic representations, such that we can accurately classify any web excerpt by inferring the topics in the text. Say C, a category on Wikipedia, has k Wikipedia articles (W). (C) = {W1, W2, W3, W4, ..., Wk} Each article Wj has several sections denoted as sjicji where sji and cji refer to the section title and content of the ith section in the jth article, respectively. We concentrate on the 10 most frequent 6http://htmlunit.sourceforge.net/ 7https://en.wikipedia.org/wiki/ Special:Export 8http://medialab.di.unipi.it/wiki/ Wikipedia_Extractor sections in any category. Training using content from sections that are not frequent might result in sub-optimal classification models. 
In our experiments, each frequent section had enough instances to optimally train a classifier. Let us denote the 10 most frequent sections in any category as S. If any sji from Wj exists in S, the content (cji) is included in the training set along with the section title (sji) as the corresponding class label. These steps are repeated for all the articles in the category. Each instance is then represented as: cji = {pji(t1), pji(t2), pji(t3), ..., pji(tm)} where m is the number of topics. sji is the corresponding label for this training instance. The set of topics are t1, t2, t3,. . ., tm while pji(tm) refers to the probability of topic m of content cji. Contents from the most frequent sections are each considered as a document and LDA is applied to generate document-topic distributions. We experiment with several values of m and use the value that generates the best classification model in each category. The topic vectors and the corresponding labels are used to train a Random Forest (RF) classifier. As the classes might be unbalanced, we apply resampling on the training set. Predicting sections: In this step, we search the web for relevant content on the stub and assign them to their respective sections. We formulate search queries to retrieve web pages using a search engine. We extract multiple excerpts from the pages and then the RF classifier predicts the class (section label) for each excerpt. (i) Query Generation: To search the web, we formulate queries by combining the stub title and keyphrases extracted from the first sentence of the introductory content of the stub. The first sentence generally contains the most important keywords that represent the article. Focused queries increases relevance of extraction as well as helps in disambiguation of content. We use the topia term extractor (Chatti et al., 2014) to extract keyphrases. For example, the query generated for a stub on Hereditary hyperbilirubinemia is Hereditary hyperbilirubinemia bilirubin metabolic disorder where bilirubin metabolic disorder are the keyphrases generated from the first sentence of the stub from Wikipedia. The query is used to identify the top 20 URLs (search results) from Google9. (ii) Boilerplate removal: Web content from the search results obtained in the previous step re9http://www.google.com 870 quires cleaning to retain only the relevant information. Removal of irrelevant content is done using boilerplate detection (Kohlsch¨utter et al., 2010). The web pages contain several excerpts (text elements) in between the HTML tags. Only the excerpts that are classified as relevant by the boilerplate detection technique are retained. (iii) Classification and assignment of excerpts: The LDA model generated earlier infers topic distribution of each excerpt based on word distributions. The RF classifier predicts the class (section title) for each excerpt based on the topic distribution. However, predictions that do not have a high level of confidence might lead to excerpts being appended to inappropriate sections. Therefore, we set the minimum confidence level at 0.5. If the prediction confidence of the RF classifier for a particular excerpt is above the minimum confidence level, the excerpt is assigned to the class; otherwise, the excerpt is discarded. In the next step, we apply summarization on the excerpts assigned to each section. 
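The content-retrieval stage described above can be approximated with off-the-shelf components. The sketch below uses scikit-learn's LatentDirichletAllocation and RandomForestClassifier as stand-ins for the MALLET and WEKA tools the system actually uses, together with the 0.5 confidence cut-off; the training texts, labels, and topic count shown are placeholders to be replaced with real category data.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data: contents of the most frequent sections from one
# category, paired with their section titles (the class labels).
section_texts = [
    "the disease is caused by a virus transmitted by mosquito bites",
    "treatment includes rest fluids and antiviral medication",
]
section_labels = ["Causes", "Treatment"]

# 1. Topic-distribution features (the paper tunes the topic count per category).
vectorizer = CountVectorizer()
counts = vectorizer.fit_transform(section_texts)
lda = LatentDirichletAllocation(n_components=10, random_state=0)
topic_features = lda.fit_transform(counts)

# 2. Random Forest over the topic vectors, with section titles as classes.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(topic_features, section_labels)

def assign_excerpt(excerpt, min_confidence=0.5):
    """Predict a section title for a web excerpt; low-confidence excerpts are discarded."""
    feats = lda.transform(vectorizer.transform([excerpt]))
    proba = clf.predict_proba(feats)[0]
    best = int(np.argmax(proba))
    return clf.classes_[best] if proba[best] >= min_confidence else None

print(assign_excerpt("antiviral drugs and plenty of fluids are recommended"))
```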
3.2 Content Summarization To summarize content for Wikipedia effectively, we formulate an ILP problem to generate abstractive summaries for each section with the objective of maximizing linguistic quality and information content. Word-graph: A word-graph is constructed using all the sentences included in the excerpts assigned to a particular section. We used the same technique to construct the word-graph as (Filippova, 2010) where the nodes represent the words (along with parts-of-speech (POS)) and directed edges between the nodes are added if the words are adjacent in the input sentences. Each sentence is connected to dummy start and end nodes to mark the beginning and ending of the sentences. The sentences from the excerpts are added to the graph in an iterative fashion. Once the first sentence is added, words from the following sentences are mapped onto a node in the graph provided that they have the exact same word form and the same POS tag. Inclusion of POS information prevents ungrammatical mappings. The words are added to the graph in the following order: • Content words are added for which there are no candidates in the existing graph; • Content words for which multiple mappings are possible or such words that occur more than once in the sentence; • Stopwords. If multiple mappings are possible, the context of the word is checked using word overlaps to the left and right within a window of two words. Eventually, the word is mapped to that node that has the highest context. We also changed Filippova’s method by adding punctuations as nodes to the graph. Figure 1 shows a simple example of the word-graph generation technique. We do not show POS and punctuations in the figure for the sake of clarity. The Figure also shows that several possible paths (sentences) exist between the dummy start and end nodes in the graph. Ideally, excerpts for any section would contain multiple common words as they belong to the same topic and have been assigned the same section. The presence of common words ensure that new sentences can be generated from the graph by fusing original set of sentences in the graph. Figure 1 shows an illustration of our approach where the set of sentences assigned to a particular section (WG1) are used to create the word-graph. The word-graph generates several possible paths between the dummy nodes; we show only three such paths (WG2). To obtain abstractive summaries, we remove generated paths from the graph that are same or very similar to any of the original sentences. If the cosine similarity of a generated path to any of the original sentences is greater than 0.8, we do not retain the path. We compute cosine similarity after applying stopword removal. However, we do not apply stemming as our graph construction is based on words existing in the same form in multiple sentences. Similar to Filippova’s work, we set the minimum path length (in words) to eight to avoid incomplete sentences. Paths without verbs are discarded. The final set of generated paths after discarding the ineligible ones are used in the next step of summary generation. 3.2.1 ILP based Path Selection Our goal is to select paths that maximize the informativeness and linguistic quality of the generated summaries. To select the best multiple possible sentences, we apply an overgenerate and select (Walker et al., 2001) strategy. We formulate an optimization problem that ‘selects’ a few of the many generated paths in between the dummy nodes from the word-graph. 
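Before turning to path selection, the word-graph generation described above can be made concrete with a small sketch. The version below is deliberately simplified: it keys nodes on exact (word, POS) pairs and omits the context-window mapping rules, stopword handling, and the similarity filter against the original sentences; it uses networkx only for path enumeration, and the tagged example sentences are invented.

```python
import networkx as nx

START, END = ("<s>", ""), ("</s>", "")

def build_word_graph(tagged_sentences):
    """Simplified Filippova-style word graph: one node per (word, POS) pair,
    directed edges between adjacent tokens, shared nodes fuse sentences."""
    graph = nx.DiGraph()
    for sentence in tagged_sentences:            # each sentence: list of (word, POS)
        nodes = [START] + list(sentence) + [END]
        for left, right in zip(nodes, nodes[1:]):
            graph.add_edge(left, right)
    return graph

def generate_paths(graph, min_words=8, max_words=30):
    """Enumerate candidate sentences (paths between the dummy nodes).
    Paths shorter than eight words or lacking a verb are discarded,
    mirroring the filters described in the text."""
    for path in nx.all_simple_paths(graph, START, END, cutoff=max_words + 1):
        tokens = path[1:-1]
        has_verb = any(pos.startswith("VB") for _, pos in tokens)
        if len(tokens) >= min_words and has_verb:
            yield " ".join(word for word, _ in tokens)

# Toy usage with pre-tagged input sentences:
sents = [
    [("the", "DT"), ("virus", "NN"), ("spreads", "VBZ"), ("quickly", "RB"),
     ("in", "IN"), ("crowded", "JJ"), ("places", "NNS"), (".", ".")],
    [("the", "DT"), ("virus", "NN"), ("spreads", "VBZ"), ("through", "IN"),
     ("contact", "NN"), ("with", "IN"), ("infected", "JJ"), ("people", "NNS"), (".", ".")],
]
for candidate in generate_paths(build_word_graph(sents)):
    print(candidate)
```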
Let pi denote each path obtained from the word-graph. We introduce three different factors to judge the relevance of 871 a path – Local informativeness (Iloc(pi)), Global informativeness (Iglob(pi)) and Linguistic quality (LQ(pi)). Any sentence path should be relevant to the central topic of the article; this relevance is tackled using Iglob(pi). Iloc(pi) models the importance of a sentence among several possible sentences that are generated from the word-graph. Linguistic quality (LQ(pi)) is computed using a trigram language model (Song and Croft, 1999) that assigns a logarithmic score of probabilities of occurrences of three word sequences in the sentences. Local Informativeness: In principle, we can use any existing method that computes sentence importance to account for Local Informativeness. In our model, we use TextRank scores (Mihalcea and Tarau, 2004) to generate an importance value of each path. TextRank creates a graph of words from the sentences. The score of each node in the graph is calculated as shown in Equation (1): S(Vi) = (1 −d) + d × P Vj∈adj(Vi) wji P Vk∈adj(Vi) wjk S(Vi) (1) where Vi represents the words and adj(Vi) denotes the adjacent nodes of Vi. Setting d to 0.80 in our experiments provided the best content selection results. The computation convergences to return final word importance scores. The informativeness score of a path Iloc(pi) is obtained by adding the importance scores of the individual words in the path. Global Informativeness: To compute global informativeness, we compute the relevance of a sentence with respect to the query to assign higher weights to sentences that explicitly mention the main title or mention certain keywords that are relevant to the article. We compute the cosine similarity using TFIDF features between each sentence and the original query that was formulated during the web search stage. We define global informativeness as follows: Iglob(pi) = CosineSimilarity(Q, pi) (2) where Q denotes the formulated query. Linguistic Quality: In order to compute Linguistic quality, we use a language model that assigns probabilities to sequence of words to compute linguistic quality. Suppose a path contains a sequence of q words {w1, w2, ..., wq}. The score LQ(pi) assigned to each path is defined as follows: LQ(pi) = 1 1−LL(w1,w2,...,wq), (3) where LL(w1, w2, ..., wq) is defined as: LL(w1, . . . , wq) = 1 L · log2 Qq t=3 P(wt|wt−1wt−2). (4) As can be seen from Equation (4), we combine the conditional probability of different sets of 3-grams (trigrams) in the sentence and averaged the value by L – the number of conditional probabilities computed. The LL(w1, w2, . . . , wq) scores are negative; with higher magnitude implying lower importance. Therefore, in Equation (3), we take the reciprocal of the logarithmic value with smoothing to compute LQ(pi). In our experiments, we used a 3-gram model10 that is trained on the English Gigaword corpus. Trigram models have been successfully used in several text-to-text generation tasks (Clarke and Lapata, 2006; Filippova and Strube, 2008) earlier. ILP Formulation: To select the best paths, we combine all the above mentioned factors Iloc(pi), Iglob(pi) and linguistic quality LQ(pi) in an optimization framework. We maximize the following objective function: F(p1, . . . , pK) = PK i=1 1 T(pi) · Iloc(pi) · Iglob(pi) · LQ(pi) · pi (5) where K represents the total number of generated paths. 
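Given the three per-path scores above, the selection itself is a small integer linear program. The paper solves it with the Gurobi optimizer; the sketch below uses the open-source PuLP/CBC solver as a stand-in and takes the scores as precomputed inputs. It also applies the summary-length and redundancy constraints introduced next (Equations 6 and 7), with the reported settings Smax = 5 and a 0.5 similarity threshold.

```python
import pulp

def lq_score(trigram_logprob_sum, n_conditionals):
    """LQ(p) = 1 / (1 - LL), where LL is the length-normalized trigram
    log-probability of the path (Equations 3-4); LL is negative."""
    ll = trigram_logprob_sum / n_conditionals
    return 1.0 / (1.0 - ll)

def select_paths(paths, i_loc, i_glob, lq, n_tokens, sim, s_max=5):
    """Choose the subset of candidate paths maximizing Equation (5),
    subject to the length and redundancy constraints."""
    prob = pulp.LpProblem("section_summary", pulp.LpMaximize)
    x = [pulp.LpVariable(f"p{i}", cat="Binary") for i in range(len(paths))]

    # Objective: each path's coefficient combines local informativeness,
    # query relevance, and linguistic quality, normalized by path length.
    prob += pulp.lpSum(
        (1.0 / n_tokens[i]) * i_loc[i] * i_glob[i] * lq[i] * x[i]
        for i in range(len(paths))
    )
    # At most s_max sentences per section (Equation 6).
    prob += pulp.lpSum(x) <= s_max
    # Near-duplicate paths cannot both be selected (Equation 7).
    for i in range(len(paths)):
        for j in range(i + 1, len(paths)):
            if sim(i, j) >= 0.5:
                prob += x[i] + x[j] <= 1

    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [paths[i] for i in range(len(paths)) if x[i].value() == 1]
```

Any ILP backend, including Gurobi as used in the paper, accepts the same formulation.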
Each pi represents a binary variable, that can be either 0 or 1, depending on whether the path is selected in the final summary or not. In addition, T(pi) – the number of tokens in a path, is included in the objective function. The term 1 T(pi) normalizes the Textrank scores by the length of the sentences. First, we ensure that a maximum of Smax sentences are selected in the summary using Equation (6). K X i=1 pi ≤Smax (6) In our experiments, we set Smax to 5 to generate short concise summaries in each section. Using a length constraint enables us to only populate the sections using the most informative content. We introduce Equation (7) to prevent similar information (cosine similarity ≥0.5) from being conveyed 10The model is available here: http://www.keithv. com/software/giga/. We used the VP 20K vocab version. 872 Category Most Frequent Sections American Mathematicians Awards, Awards and honors, Biography, Books, Career, Education, Life, Publications, Selected publications, Work Diseases and Disorders Causes, Diagnosis, Early life, Epidemiology, History, Pathophysiology, Prognosis, Signs and symptoms, Symptoms, Treatment US Software companies Awards, Criticism, Features, Games, History, Overview, Products, Reception, Services, Technology Table 1: Data characteristics of three domains on Wikipedia Category #Articles #Instances American Mathematicians ∼2100 1493 Diseases and Disorders ∼7000 9098 US Software companies ∼3600 2478 Table 2: Dataset used for classification by different sentences. This constraint reduces redundancy. If two sentences have a high degree of similarity, only one out of the two can be selected in the summary. ∀i, i′ ∈[1, K], i ̸= i′, pi + pi′ ≤1 if sim(pi, pi′) ≥0.5. (7) The ILP problem is solved using the Gurobi optimizer (2015). The solution to the problem decides the paths that should be included in the final summary. We populate the sections on Wikipedia using the final summaries generated for each section along with the section title. All the references that have been used to generate the sentences are appended along with the content generated on Wikipedia. 4 Experimental Results To evaluate the effectiveness of our proposed technique, we conduct several experiments. First, we evaluate our content generation approach by generating content for comprehensive articles that already exist on Wikipedia. Second, we analyze reviewer reactions on our system generated articles by adding content to several stubs on Wikipedia. Our experiments were designed to answer the following questions: (i)What are the optimal number of topic distribution features for each category? What are the classification accuracies in each domain? (ii)To what extent can our technique generate the content for articles automatically? (iii)What are the general reviewer reactions on Wikipedia and what percentage of automatically generated content on Wikipedia is retained? Dataset Construction: As mentioned earlier in Section 3.1, we crawl Wikipedia articles by traversing the category graph. Articles that contain at least three sections were included in the training set; other articles having lesser number of sections Figure 3: Performance of Classifier in the three categories based on the number of topics. are generally labeled as stubs and hence not used for training. Table 1 shows the most frequent sections in each category. Further, Table 2 shows the total number of articles retrieved from Wikipedia in each category. The total number of instances are also shown. 
The number of instances denotes the total number of the most frequent sections in each category. As can be seen from the table, the number of instances is higher than the number of articles only in case of the category on diseases. This implies that there are generally more common sections in the diseases category than the other categories. In each category, the content from only the most frequent sections were used to generate a topic model. The topic model is further used to infer topic distribution vectors from the training instances. We used the MALLET toolkit (McCallum, 2002) for generating topic distribution vectors and the WEKA package (Hall et al., 2009) for the classification tasks. Optimal number of topics: The LDA model requires a pre-defined number of topics. We experiment with several values of the number of topics ranging from 10 to 100. The topic distribution features of the content of the instances are used to train a Random Forest Classifier with the corresponding section titles as the class labels. As can be seen in the Figure 3, the classification performance varies across domains as well as on the number of topics. The optimal number of topics based on the dataset are marked in blue cir873 Category LDA-RF SVM-WV American Mathematicians 0.778 0.478 Diseases and Disorders 0.886 0.801 US Software companies 0.880 0.537 Table 3: Classification: Weighted F-Scores cles (40, 50 and 20 topics for Diseases, Software Companies in US and American mathematicians, respectively) in the Figure. We classify web excerpts using the best performing classifiers trained using the optimal number of topic features in each category. Classification performance: We use 10-fold cross validation to evaluate the accuracy of our classifier. According to the F-Scores, our classifier (LDA-RF) performs similarly in the categories on Diseases and US Software companies. However, the accuracy is lower in the American Mathematicians category. We also experimented with a baseline classifier, that is trained on TFIDF features (upto trigrams). A Support vector machine (Cortes and Vapnik, 1995) classifier obtained the best performance using the TFIDF features. The baseline system is referred to as SVM-WV. We experimented with several other combinations of classifiers; however, we show only the best performing systems using the LDA and TFIDF features. As can be seen from the Table 3, our classifier (LDARF) outperforms SVM-WV significantly in all the domains. SVM-WV performs better in the category on diseases than the other two categories and the performance is comparable to (LDA-RF). The diseases category has more uniformity in terms of the section titles, hence specific words or phrases characterize the sections well. In contrast, word distributions (LDA) work significantly better than TFIDF features in the other two categories. Error Analysis: We performed error analysis to understand the reason for misclassifications. As can be seen from the Table 1, all the categories have several overlapping sections. For example, Awards and honors and Awards contain similar content. Authors use various section names for similar content in the articles within the same category. We analyzed the confusion matrices, and found that multiple instances in Awards were classified into the class of Awards and honors. Similar observations are made on the Books and Publications classes – which are related sections in the context of academic biographies. 
In future, we plan to use semantic measures to relate similar classes automatically and group them in the same Category System ROUGE-1 ROUGE-2 WikiKreator 0.522 0.311 American Mathematicians Perceptron 0.431 0.193 Extractive 0.471 0.254 WikiKreator 0.537 0.323 Diseases and Disorders Perceptron 0.411 0.197 Extractive 0.473 0.232 WikiKreator 0.521 0.321 US Software companies Perceptron 0.421 0.228 Extractive 0.484 0.257 Table 4: ROUGE-1 and 2 Recall values – Comparing system generated articles to model articles class during classification. Content Selection Evaluation: To evaluate the effectiveness of our content generation process, we generated the content of 500 randomly selected articles that already exist on Wikipedia in each of the categories. We compare WikiKreator’s output against the current content of those articles on Wikipedia using ROUGE (Lin, 2004). ROUGE matches N-gram sequences that exist in both the system generated articles and the original Wikipedia articles (gold standard). We also compare WikiKreator’s output with an existing Wikipedia generation system [Perceptron] of Sauper and Barzilay (2009)11 that employs a perceptron learning framework to learn topic specific extractors. Queries devised using the conjunction of the document title and the section title were used to obtain excerpts from the web using a search engine, which were used in the perceptron model. In Perceptron, the most important sections in the category was determined using a bisectioning algorithm to identify clusters of similar sections. To understand the effectiveness of our abstractive summarizer, we design a system (Extractive) that uses an extractive summarization module. In Extractive, we use LexRank (Erkan and Radev, 2004) as the summarizer instead of our ILP based abstractive summarization model. We restrict the extractive summaries to 5 sentences for accurate comparison of both the systems. The same content was received as input from the classifier by the Extractive as well as our ILP-based system. As can be seen from the Table 4, the ROUGE scores obtained by WikiKreator is higher than that of the other comparable systems in all the categories. The higher ROUGE scores imply that WikiKreator is generally able to retrieve useful information from the web, synthesize them and present the important information in the article. 11The system is available here: https://github. com/csauper/wikipedia 874 Statistics Count Number of stubs edited 40 Number of stubs retained without any changes 21 Number of stubs that required minor editing 6 Number of stubs where edits were modified by reviewers 4 Number of stubs in which content was removed 9 Average change in size of stubs 515 bytes Average number of edits made post content-addition ∼3 Table 5: Statistics of Wikipedia generation However, it may also be noted that the Extractive system outperforms the Perceptron framework. Summarization from multiple sources generates more informative summaries and is more effective than ‘selection’ of the most informative excerpt, which is often inadequate due to potential loss of information. WikiKreator performs better than the extractive system on all the categories. Our ILPbased abstractive summarization system fuses and selects content from multiple sentences, thereby aggregating information successfully from multiple sources. In contrast, LexRank ‘extracts’ the top 5 sentences that results in some information loss. 
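For readers who want to reproduce the content-selection numbers, ROUGE-N recall reduces to a clipped n-gram overlap between the generated article and the Wikipedia gold standard. The sketch below is a simplified reimplementation (no stemming, stopword removal, or multiple references) rather than the official ROUGE package used for Table 4; the example strings are invented.

```python
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n_recall(system_text, reference_text, n=1):
    """Fraction of reference n-grams recovered by the system output (clipped counts)."""
    sys_counts = ngrams(system_text.lower().split(), n)
    ref_counts = ngrams(reference_text.lower().split(), n)
    if not ref_counts:
        return 0.0
    overlap = sum(min(count, sys_counts[gram]) for gram, count in ref_counts.items())
    return overlap / sum(ref_counts.values())

# Example: compare a generated section against the gold Wikipedia section.
print(rouge_n_recall("the flu spreads through droplets",
                     "the flu virus spreads through respiratory droplets", n=1))
```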
Analysis of Wikipedia Reviews: To compare our method with the other techniques, it is necessary to generate content and append to Wikipedia stubs using all the techniques. However, recent work on article generation (Banerjee et al., 2014) has already shown that content directly copied from web sources cannot be used on Wikipedia. Further, bots using copyrighted content might be banned and real-users would have to read sub-standard articles due to the internal tests we perform. Due to the above mentioned reasons, we appended content generated only using our abstractive summarization technique. We published content generated by WikiKreator on Wikipedia and appended the content to 40 randomly selected stubs. As can be seen from the Table 5, the content generated using our system was generally accepted by the reviewers. Half of the articles did not require any further changes; while in 6 cases (15%) the reviewers asked us to fix grammatical issues. In 9 stubs, the reliability of the cited references was questioned. Information sources on Wikipedia need to satisfy a minimum reliability standard, which our algorithm currently cannot determine. On an average, 3 edits were made to the Wikipedia articles that we generated. In general, there is an average increase in the content size of the stubs that we edited showing that our method is capable of producing content that generally satisfy Wikipedia criterion. Analysis of section assignment: We manually inspected generated content of 20 articles in each category. Generated summaries are both informative and precise. However, in certain cases, the generated section title is not the same as the section title in the original Wikipedia article. For example, we generated content for the section “Causes” for the article on Middle East Respiratory Syndrome (MERS)12: Milk or meat may play a role in the transmission of the virus . People should avoid drinking raw camel milk or meat that has not been properly cooked . There is growing evidence that contact with live camels or meat is causing MERS. The corresponding content on the Wikipedia is in a section labeled as “Transmission”. Section titles at the topmost level in a category might not be relevant to all the articles. Instead of using a topdown approach of traversing the category-graph, we can also use a bottom-up approach where we learn from all the categories that an article belongs to. For example, the article on MERS belongs to two categories: Viral respiratory tract infection and Zoonoses. Training using all the categories will allow context-driven section identification. Most frequent sections at a higher level in the category graph might not always be relevant to all the articles within a category. 5 Conclusions and Future Work In this work, we presented WikiKreator that can generate content automatically to improve Wikipedia stubs. Our technique employes a topicmodel based text classifier that assigns web excerpts into various sections on an article. The excerpts are summarized using a novel abstractive summarization technique that maximizes informativeness and linguistic quality of the generated summary. Our experiments reveal that WikiKreator is capable of generating well-formed informative content. The summarization step ensures that we avoid any copyright violation issues. The ILP based sentence generation strategy ensures that we generate novel content by synthesizing information from multiple sources and thereby improve content selection. 
In future, we plan to cluster related sections using semantic relatedness measures. We also plan to estimate reliabilities of sources to retrieve information only from reliable sources. 12https://en.wikipedia.org/wiki/Middle_ East_respiratory_syndrome 875 References Ricardo Baeza-Yates, Berthier Ribeiro-Neto, et al. 1999. Modern information retrieval, volume 463. ACM press New York. Siddhartha Banerjee, Cornelia Caragea, and Prasenjit Mitra. 2014. Playscript classification and automatic wikipedia play articles generation. In Pattern Recognition (ICPR), 2014 22nd International Conference on, pages 3630–3635, Aug. David M Blei, Andrew Y Ng, and Michael I Jordan. 2003. Latent dirichlet allocation. the Journal of machine Learning research, 3:993–1022. Mohamed Amine Chatti, Darko Dugoija, Hendrik Thus, and Ulrik Schroeder. 2014. Learner modeling in academic networks. In Advanced Learning Technologies (ICALT), 2014 IEEE 14th International Conference on, pages 117–121. IEEE. James Clarke and Mirella Lapata. 2006. Constraintbased sentence compression an integer programming approach. In Proceedings of the COLING/ACL on Main conference poster sessions, pages 144–151. Association for Computational Linguistics. Corinna Cortes and Vladimir Vapnik. 1995. Supportvector networks. Machine learning, 20(3):273–297. Vipul Dalal and Latesh Malik. 2013. A survey of extractive and abstractive text summarization techniques. In Emerging Trends in Engineering and Technology (ICETET), 2013 6th International Conference on, pages 109–110. IEEE. G¨unes Erkan and Dragomir R Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text summarization. J. Artif. Intell. Res.(JAIR), 22(1):457–479. Katja Filippova and Michael Strube. 2008. Sentence fusion via dependency graph compression. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 177–185. Association for Computational Linguistics. Katja Filippova. 2010. Multi-sentence compression: Finding shortest paths in word graphs. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 322–330. Association for Computational Linguistics. Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing semantic relatedness using wikipediabased explicit semantic analysis. In IJCAI, volume 7, pages 1606–1611. Juri Ganitkevitch, Chris Callison-Burch, Courtney Napoles, and Benjamin Van Durme. 2011. Learning sentential paraphrases from bilingual parallel corpora for text-to-text generation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1168–1179. Association for Computational Linguistics. Shima Gerani, Yashar Mehdad, Giuseppe Carenini, T. Raymond Ng, and Bita Nejat. 2014. Abstractive summarization of product reviews using discourse structure. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1602–1613. Association for Computational Linguistics. Inc. Gurobi Optimization. 2015. Gurobi optimizer reference manual. Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H Witten. 2009. The weka data mining software: an update. ACM SIGKDD explorations newsletter, 11(1):10– 18. Christian Kohlsch¨utter, Peter Fankhauser, and Wolfgang Nejdl. 2010. Boilerplate detection using shallow text features. In Proceedings of the third ACM international conference on Web search and data mining, pages 441–450. ACM. Peng Li, Yinglin Wang, and Jing Jiang. 2013. 
Automatically building templates for entity summary construction. Information Processing & Management, 49(1):330–340. Andy Liaw and Matthew Wiener. 2002. Classification and regression by randomforest. R news, 2(3):18– 22. Chin-Yew Lin. 2004. Rouge: A package for automatic evaluation of summaries. In Text Summarization Branches Out: Proceedings of the ACL-04 Workshop, pages 74–81. Andrew K McCallum. 2002. {MALLET: A Machine Learning for Language Toolkit}. Olena Medelyan, Ian H Witten, and David Milne. 2008. Topic indexing with wikipedia. In Proceedings of the AAAI WikiAI workshop, pages 19–24. Rada Mihalcea and Paul Tarau. 2004. Textrank: Bringing order into texts. Association for Computational Linguistics. Ani Nenkova, Sameer Maskey, and Yang Liu. 2011. Automatic summarization. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts of ACL 2011, page 3. Association for Computational Linguistics. Christina Sauper and Regina Barzilay. 2009. Automatically generating wikipedia articles: A structureaware approach. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1-Volume 1, pages 208–216. Association for Computational Linguistics. 876 Fei Song and W Bruce Croft. 1999. A general language model for information retrieval. In Proceedings of the eighth international conference on Information and knowledge management, pages 316– 321. ACM. Wei Song, Yu Zhang, Ting Liu, and Sheng Li. 2010. Bridging topic modeling and personalized search. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1167– 1175. Association for Computational Linguistics. Marilyn A Walker, Owen Rambow, and Monica Rogati. 2001. Spot: A trainable sentence planner. In Proceedings of the second meeting of the North American Chapter of the Association for Computational Linguistics on Language technologies, pages 1–8. Association for Computational Linguistics. Conglei Yao, Xu Jia, Sicong Shou, Shicong Feng, Feng Zhou, and HongYan Liu. 2011. Autopedia: automatic domain-independent wikipedia article generation. In Proceedings of the 20th international conference companion on World wide web, pages 161– 162. ACM. Torsten Zesch and Iryna Gurevych. 2007. Analysis of the wikipedia category graph for nlp applications. In Proceedings of the TextGraphs-2 Workshop (NAACL-HLT 2007), pages 1–8. 877
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 878–888, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Language to Code: Learning Semantic Parsers for If-This-Then-That Recipes Chris Quirk Microsoft Research Redmond, WA, USA [email protected] Raymond Mooney∗ UT Austin Austin TX, USA [email protected] Michel Galley Microsoft Research Redmond, WA, USA [email protected] Abstract Using natural language to write programs is a touchstone problem for computational linguistics. We present an approach that learns to map natural-language descriptions of simple “if-then” rules to executable code. By training and testing on a large corpus of naturally-occurring programs (called “recipes”) and their natural language descriptions, we demonstrate the ability to effectively map language to code. We compare a number of semantic parsing approaches on the highly noisy training data collected from ordinary users, and find that loosely synchronous systems perform best. 1 Introduction The ability to program computers using natural language would clearly allow novice users to more effectively utilize modern information technology. Work in semantic parsing has explored mapping natural language to some formal domain-specific programming languages such as database queries (Woods, 1977; Zelle and Mooney, 1996; Berant et al., 2013), commands to robots (Kate et al., 2005), operating systems (Branavan et al., 2009), smartphones (Le et al., 2013), and spreadsheets (Gulwani and Marron, 2014). Developing such languageto-code translators has generally required specific dedicated efforts to manually construct parsers or large corpora of suitable training examples. An interesting subset of the possible program space is if-then “recipes,” simple rules that allow users to control many aspects of their digital life including smart devices. Automatically parsing ∗Work performed while visiting Microsoft Research. these recipes represents a step toward complex natural language programming, moving beyond single commands toward compositional statements with control flow. Several services, such as Tasker and IFTTT, allow users to create simple programs with “triggers” and “actions.” For example, one can program their Phillips Hue light bulbs to flash red and blue when the Cubs hit a home run. A somewhat complicated GUI allows users to construct these recipes based on a set of information “channels.” These channels represent many types of information. Weather, news, and financial services have provided constant updates through web services. Home automation sensors and controllers such as motion detectors, thermostats, location sensors, garage door openers, etc. are also available. Users can then describe the recipes they have constructed in natural language and publish them. Our goal is to build semantic parsers that allow users to describe recipes in natural language and have them automatically mapped to executable code. We have collected 114,408 recipedescription pairs from the http://ifttt.com website. Because users often provided short or incomplete English descriptions, the resulting data is extremely noisy for the task of training a semantic parser. 
Therefore, we have constructed semantic-parser learners that utilize and adapt ideas from several previous approaches (Kate and Mooney, 2006; Wong and Mooney, 2006) to learn an effective interpreter from such noisy training data. We present results on our collected IFTTT corpus demonstrating that our best approach produces more accurate programs than several competing baselines. By exploiting such “found data” on the web, semantic parsers for natural-language programming can potentially be developed with minimal effort. 878 2 Background We take an approach to semantic parsing that directly exploits the formal grammar of the target meaning representation language, in our case IFTTT recipes. Given supervised training data in the form of natural-language sentences each paired with their corresponding IFTTT recipe, we learn to introduce productions from the formal-language grammar into the derivation of the target program based on expressions in the natural-language input. This approach originated with the SILT system (Kate et al., 2005) and was further developed in the WASP (Wong and Mooney, 2006; Wong and Mooney, 2007b) and KRISP (Kate and Mooney, 2006) systems. WASP casts semantic parsing as a syntax-based statistical machine translation (SMT) task, where a synchronous context-free grammar (SCFG) (Wu, 1997; Chiang, 2005; Galley et al., 2006) is used to model the translation of natural language into a formal meaning representation. It uses statistical models developed for syntax-based SMT for lexical learning and parse disambiguation. Productions in the formal-language grammar are used to construct synchronous rules that simultaneously model the generation of the natural language. WASP was subsequently “inverted” to use the same synchronous grammar to generate natural language from the formal language (Wong and Mooney, 2007a). KRISP uses classifiers trained using a SupportVector Machine (SVM) to introduce productions in the derivation of the formal translation. The productions of the formal-language grammar are treated like semantic concepts to be recognized from natural-language expressions. For each production, an SVM classifier is trained using a string subsequence kernel (Lodhi et al., 2002). Each classifier can then estimate the probability that a given natural-language substring introduces a production into the derivation of the target representation. During semantic parsing, these classifiers are employed to estimate probabilities on different substrings of the sentence to compositionally build the most probable meaning representation for the sentence. Unlike WASP whose synchronous grammar needs to be able to directly parse the input, KRISP’s approach to “soft matching” productions allows it to produce a parse for any input sentence. Consequently, KRISP was shown to be much more robust to noisy training data than previous approaches to semantic parsing (Kate and Mooney, 2006). Since our “found data” for IFTTT is extremely noisy, we have taken an approach similar to KRISP; however, we use a probabilistic log-linear text classifier rather than an SVM to recognize productions. This method of assembling well-formed programs guided by a natural language query bears some resemblance to Keyword Programming (Little and Miller, 2007). In that approach, users enter natural language queries in the middle of an existing program; this query drives a search for programs that are relevant to the query and fit within the surrounding program. 
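The per-production classifiers sketched above can be approximated with standard tooling. Below is a minimal sketch assuming scikit-learn as a stand-in for the authors' own log-linear implementation; the feature choices (word unigrams/bigrams and character trigrams) mirror those listed later in Section 4.3.1, and every name here is illustrative rather than taken from the paper's code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import FeatureUnion, Pipeline

def make_production_classifier():
    """Binary classifier for a single grammar production: given a recipe
    description (or a substring of it), estimate the probability that the
    production appears in the recipe's derivation, using a log-linear model
    over simple n-gram features."""
    features = FeatureUnion([
        ("word_ngrams", CountVectorizer(ngram_range=(1, 2), binary=True)),
        ("char_trigrams", CountVectorizer(analyzer="char_wb",
                                          ngram_range=(3, 3), binary=True)),
    ])
    return Pipeline([("features", features),
                     ("logreg", LogisticRegression(max_iter=1000))])

# One such classifier would be trained per production, with positive examples
# drawn from descriptions whose gold recipes contain that production.
```

During parsing, `predict_proba` applied to a description (or span) would supply the P(r|E) terms used by the generation methods of Sections 4.3 and 4.4.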
However, the function used to score derivations is a simple matching heuristic relying on the overlap between query terms and program identifiers. Our approach uses machine learning to build a correspondence between queries and recipes based on parallel data. There is also a large body of work applying Combinatory Categorical Grammars to semantic parsing, starting with Zettlemoyer and Collins (2005). Depending on the set of combinators used, this approach can capture more expressive languages than synchronous context-free MT. In practice, however, synchronous MT systems have competitive accuracy scores (Andreas et al., 2013). Therefore, we have not yet evaluated CCG on this task. 3 If-this-then-that recipes The recipes considered in this paper are diverse and powerful despite being simple in structure. Each recipe always contains exactly one trigger and one action. Whenever the conditions of the trigger are satisfied, the action is performed. The resulting recipes can perform tasks such as home automation (“turn on my lights when I arrive home”), home security (“text me if the door opens”), organization (“add receipt emails to a spreadsheet”), and much more (“remind me to drink water if I’ve been at a bar for more than two hours”). Triggers and actions are drawn from a wide range of channels that must be activated by each user. These channels can represent many entities and services, including devices (such as Android devices or WeMo light switches) and knowledge sources (such as ESPN or Gmail). Each channel exposes a set of functions for both trigger and action. Several services such as IFTTT, Tasker, and Llama allow users to author if-this-then-that recipes. IFTTT is unique in that it hosts a large set of recipes along with descriptions and other metadata. Users of this site construct recipes using a GUI interface to select the trigger, action, and the 879 parameters for both trigger and action. After the recipe is authored, the user must provide a description and optional set of notes for this recipe and publish the recipe. Other users can browse and use these published recipes; if a user particularly likes a recipe, they can mark it as a favorite. As of January 2015, we found 114,408 recipes on http://ifttt.com. Among the available recipes we encountered a total of 160 channels. In total, we found 552 trigger functions from 128 of those channels, and 229 action functions from 99 channels, for a total of 781 functions. Each recipe includes a number of pieces of information: description1, note, author, number of uses, etc. 99.98% of the entries have a description, and 35% contain a note. Based on availability, we focused primarily on the description, though there are cases where the note is a more explicit representation of program intent. The recipes at http://ifttt.com are represented as HTML forms, with combo boxes, inline maps, and other HTML UI components allowing end users to select functions and their parameters. This is convenient for end users, but difficult for automated approaches. We constructed a formal grammar of possible program structures, and from each HTML form we extracted an abstract syntax tree (AST) conforming to this grammar. We model this as a context-free grammar, though this assumption is violated in some cases. Consider the program in Figure 1, where some of the parameters used the action are provided by the trigger. This data could be used in a variety of ways. 
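To make the recipe representation concrete, the following sketch spells out the structure implied by Figure 1. The dataclass layout and field names are our own simplification (the paper works with a general context-free grammar over ASTs extracted from the HTML forms), and the parameter values are transcribed from Figure 1 as best they can be read.

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class FunctionCall:
    channel: str                                   # e.g. "Google Drive"
    function: str                                  # e.g. "Add row to spreadsheet"
    params: Dict[str, str] = field(default_factory=dict)

@dataclass
class Recipe:
    description: str                               # the user-written English description
    trigger: FunctionCall                          # the IF part
    action: FunctionCall                           # the THEN part

# The recipe of Figure 1, written out by hand for illustration.
missed_calls = Recipe(
    description="Archive your missed calls from Android to Google Drive",
    trigger=FunctionCall("Android Phone Call", "Any phone call missed"),
    action=FunctionCall("Google Drive", "Add row to spreadsheet", {
        "Spreadsheet name": "missed",
        "Formatted row": "{{OccurredAt}} {{FromNumber}} {{ContactName}}",
        "Drive folder path": "IFTTT Android",
    }),
)
```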
Recipes could be suggested to users based on their activities or interests, for instance, or one could train a natural language generation system to give a readable description of code. In this paper, the paired natural language descriptions and abstract syntax trees serve as training data for semantic parsing. Given a description, a system must produce the AST for an IFTTT recipe. We note in passing that the data was constructed in the opposite direction: users first implemented the recipe and then provided a description afterwards. Ideal data for our application would instead start with the description and construct the recipe based on this description. Yet the data is unusually large and diverse, making it interesting training data for mapping natural language to code. 1The IFTTT site refers to this as “title”. 4 Program synthesis methods We consider a number of methods to map the natural language description of a problem into its formal program representation. 4.1 Program retrieval One natural baseline is retrieval. Multiple users could potentially have similar needs and therefore author similar or even identical programs. Given a novel description, we can search for the closest description in a table of program-description pairs, and return the associated program. We explored several text-similarity metrics, and found that string edit distance over the unmodified character sequence achieved best performance on the development set. As the corpus of program-description pairs becomes larger, this baseline should increase in quality and coverage. 4.2 Machine Translation The downside to retrieval is that it cannot generalize. Phrase-based SMT systems(Och et al., 1999; Koehn et al., 2003) can be seen as an incremental step beyond retrieval: they segment the training data and attempt to match and assemble those segments at runtime. If the phrase length is unbounded, retrieval is almost a special case: it could return whole programs from the training data when the description matches exactly. In addition, they can find subprograms that are relevant to portions of the input, and assemble those subprograms into whole programs. As a baseline, we adopt a recent approach (Andreas et al., 2013) that casts semantic parsing as phrasal translation. First, the ASTs are converted into flat sequences of code tokens using a pre-order left-to-right traversal. The tokens are annotated with their arity, which is sufficient to reconstruct the tree given a well formed sequence of tokens using a simple stack algorithm. Given this parallel corpus of language and code tokens, we train a conventional statistical machine translation system that is similar in structure and performance to Moses (Koehn et al., 2007). We gather the k-best translations, retaining the first such output that can be successfully converted into a well-formed program according to the formal grammar. Integration of the well-formedness constraint into decoding would likely produce better translations, but would require more modifications to the MT system. 
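The phrasal baseline's conversion between trees and token sequences can be sketched as follows. Trees are represented as (label, children) tuples, and the "label@arity" token format is an assumption for illustration (the paper only states that tokens are annotated with their arity); the reconstruction uses the call stack in place of the explicit stack mentioned in the text.

```python
def linearize(node):
    """Pre-order, left-to-right traversal; each token carries its arity so the
    tree can be rebuilt from the flat sequence."""
    label, children = node                     # node = (label, [child nodes])
    tokens = [f"{label}@{len(children)}"]
    for child in children:
        tokens.extend(linearize(child))
    return tokens

def delinearize(tokens):
    """Rebuild the tree from an arity-annotated token sequence."""
    def parse(pos):
        label, arity = tokens[pos].rsplit("@", 1)
        children = []
        pos += 1
        for _ in range(int(arity)):
            child, pos = parse(pos)
            children.append(child)
        return (label, children), pos
    tree, end = parse(0)
    assert end == len(tokens), "sequence is not a single well-formed tree"
    return tree
```

A round trip `delinearize(linearize(tree))` recovers the original tree, which is what allows each candidate MT output to be checked for well-formedness before it is accepted.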
Approaches to semantic parsing inspired by machine translation have proven effective when the 880 (A) CHANNELS (B) FUNCTIONS (C) PARAMETERS IF ACTION Google Drive Add row to spreadsheet Drivefolder path IFTTT Android Formatted row {{OccurredAt}} {{FromNumber}} {{ContactName}} Spreadsheet name missed TRIGGER Android Phone Call Any phone call missed Archive your missed calls from Android to Google Drive Figure 1: Example recipe with description, with nodes corresponding to (a) Channels, (b) Functions, and (c) Parameters indicated with specific boxes. Note how some of the fields in braces, such as OccurredAt, depend on the trigger. data is very parallel. In the IFTTT dataset, however, the available pairs are not particularly clean. Word alignment quality suffers, and production extraction suffers in turn. Descriptions in this corpus are often quite telegraphic (e.g., “Instagram to Facebook”) or express unnecessary pieces of information, or are downright unintelligible (“ 2Mrl14”). Approaches that rely heavily on lexicalized information and assume a one-to-one correspondence between source and target (at the phrase, if not the word level) struggle in this setting. 4.3 Generation without alignment An alternate approach is to treat the source language as context and a general direction, rather than a hard constraint. The target derivation can be produced primarily according to the formal grammar while guided by features from the source language. For each production in the formal grammar, we can train a binary classifier intended to predict whether that production should be present in the derivation. This classifier uses general features of the source sentence. Note how this allows productions to be inferred based on context: although a description might never explicitly say that a production is necessary, the surrounding context might strongly imply it. We assign probabilities to derivations by looking at each production independently. A derivation either uses or does not use each production. For each production used in the derivation, we multiply by the probability of its inclusion. Likewise for each production not used in the derivation, we multiply by one minus the probability of its inclusion. Let G = (V, Σ, R, S) be the formal grammar with non-terminals V , terminal vocabulary Σ, productions R and start symbol S. E represents a source sentence, and D, a formal derivation tree for that sentence. R(D) is the set of productions in that derivation. The score of a derivation is the following product: P(D|E) = Y r∈R(D) P(r|E) Y r∈R\R(D) P(¬r|E) The binary classifiers are log-linear models over features, F, of the input string: P(r|E) ∝ exp θ⊤ r F(E)  . 4.3.1 Training For each production, we train a binary classifier predicting its presence or absence. Given a training set of parallel descriptions and programs, we create |R| binary classifier training sets, one for each classifier. We currently use a small set of simple features: word unigrams and bigrams, and character trigrams. 4.3.2 Inference When presented with a novel utterance, E, our system must find the best code corresponding to that utterance. We use a top-down, left-to-right generation strategy, where each search node contains a stack of symbols yet to be expanded and a log probability. The initial node is ⟨[S] , 0⟩; and a node is complete when its stack of non-terminals is empty. 
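Restated in code, the derivation score of Section 4.3 multiplies, over every production in the grammar, either P(r|E) or 1 − P(r|E) depending on whether r appears in the derivation. The sketch below works in log space; `production_probs` stands in for the outputs of the per-production classifiers and is a hypothetical name.

```python
import math

def derivation_log_score(derivation_productions, production_probs):
    """log P(D|E): product over all grammar productions of P(r|E) if r is used
    in the derivation D and 1 - P(r|E) otherwise."""
    used = set(derivation_productions)
    score = 0.0
    for r, p in production_probs.items():
        p = min(max(p, 1e-12), 1.0 - 1e-12)   # guard against log(0)
        score += math.log(p if r in used else 1.0 - p)
    return score
```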
Given a search node with a non-terminal as its first symbol on the stack, we expand with any production for that symbol, putting its yield onto the stack and updating the node cost to include its 881 derivation score: ⟨[X, α] , p⟩(X →β) ∈R ⟨[β, α] , p + log P(X →β|E)⟩⟩ If the first stack item is a terminal, it is scanned: ⟨[a, α] , p⟩a ∈Σ ⟨[α] , p⟩ Using these inference rules, we utilize a simple greedy approach that only accounts for the productions included in the derivation. To account for the negative Q r∈R\R(D) P(¬r|E) factors, we use a beam search, and rerank the n-best final outcomes from this search based on the probability of all productions that are not included. Partial derivations are grouped into beams according to the number of productions in that derivation. 4.4 Loosely synchronous generation The above method learns distributions over productions given the input, but treats the sentence as an undifferentiated bag of linguistic features. The syntax of the source sentence is not leveraged at all, nor is any correspondence between the language syntax and the program structure used. Often the pairs are not in sufficient correspondence to suggest synchronous approaches, but some loose correspondence to maintain at least a notion of coverage could be helpful. We pursue an approach similar to KRISP (Kate and Mooney, 2006), with several differences. First, rather than a string kernel SVM, we use a log-linear model with character and word n-gram features. 2 Second, we allow the model to consider both spaninternal features and contextual features. This approach explicitly models the correspondence between nodes in the code side and tokens in the language. Unlike standard MT systems, word alignment is not used as a hard constraint. Instead, this phrasal correspondence is induced as part of model training. We define a semantic derivation D of a natural language sentence E as a program AST where each production in the AST is augmented with a span. The substrings covered by the children of a production must not overlap, and the substring covered by the parent must be the concatenation of the substrings covered by the children. Figure 2 shows a sample semantic derivation. 2We have a preference for log-linear models given their robustness to hyperparameter settings, ease of optimization, and flexible incorporation of features. An SVM trained with similar features should have similar performance, though. IF[1-6] ACTION[1-2] Phone call[1-2] Call my phone[1-2] TRIGGER[3-6] ESPN[3-6] New in-game update[3-6] Chicago Cubs[5-5] 1 2 3 4 5 6 Call me if the Cubs score Figure 2: An example training pair with its semantic derivation. Note the correspondence between formal language and natural language denoted with indices and spans. The core components of KRISP are string-kernel classifiers P(r, i..j|E) denoting the probability that a production r in the AST covers the span of words i..j in the sentence E. Here, i < j are positions in the sentence indicating the span of tokens most relevant to this production. In other words, the substring E[i..j] denotes the production r with probability P(r, i..j|E). The probability of a semantic derivation D is defined as follows: P(D|E) = Y (r,i..j)∈D P(r, i..j|E) That is, we assume that each production is independent of all others, and is conditioned only on the string to which it is aligned. This can be seen as a refinement of the above production classification approach using a notion of correspondence. 
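The expand/scan inference rules above can be realized, in their simplest greedy form (without the beam search and reranking the paper actually uses), roughly as follows; `grammar`, `production_probs`, and `is_terminal` are assumed inputs, with `grammar` mapping each non-terminal to its candidate right-hand sides.

```python
import math

def greedy_generate(grammar, start_symbol, production_probs, is_terminal):
    """Greedy top-down, left-to-right generation: keep a stack of symbols still
    to be expanded; terminals are scanned, and each non-terminal is expanded
    with its most probable production given the sentence."""
    stack = [start_symbol]
    used_productions, terminals, log_prob = [], [], 0.0
    while stack:
        symbol = stack.pop(0)
        if is_terminal(symbol):
            terminals.append(symbol)                          # scan
            continue
        best_rhs = max(grammar[symbol],                       # expand
                       key=lambda rhs: production_probs[(symbol, rhs)])
        log_prob += math.log(production_probs[(symbol, best_rhs)])
        used_productions.append((symbol, best_rhs))
        stack = list(best_rhs) + stack                        # leftmost first
    return used_productions, terminals, log_prob
```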
Rather than using string kernels, we use logistic regression classifiers with word unigram, word bigram, and character trigram features. Unlike KRISP, we include features from both inside and outside the substring. Consider the production “Phone call →Call my phone” with span 1-2 from Figure 2. Word unigram features indicate that “call” and “me” are inside the span; the remaining words are outside the span. Word bigram features indicate that “call me” is inside the span, “me if” is on the boundary of the span, and all remaining bigrams are outside the span. 4.4.1 Training These classifiers are trained in an iterative EMlike manner (Kate and Mooney, 2006). Starting with some initial classifiers and a training set of NL and AST pairs, we search for the most likely derivation. If the AST underlying this derivation matches the gold AST, then this derivation is added 882 to the set of positive instances. Otherwise, it is added to the set of negative instances, and the best derivation constrained to match the gold standard AST is found and added to the positive instances. Given this revised training data, the classifiers are retrained. After each pass through the training data, we evaluate the current model on the development set. This procedure is repeated until developmentset performance begins to fall. 4.4.2 Inference To find the most probable derivation according to the grammar, KRISP uses a variation on Earley parsing. This is similar to the inference method from Section 4.3.2, but each item now additionally maintains a position and a span. Inference proceeds left-to-right through the source string. The natural language may present information in a different order than the formal language, so all permutations of rules are considered during inference. We found this inference procedure to be quite slow for larger data sets, especially because wide beams were needed to prevent search failure. To speed up inference, we used scores from the position-independent classifiers as completion-cost estimates. The completion-cost estimate for a given symbol is defined recursively. Terminals have a cost of zero. Productions have a completion cost of the log probability of the production given the sentence, plus the completion cost of all non-terminal symbols. The completion cost for a non-terminal is the max cost of any production rooted in that nonterminal. Computing this cost requires traversing all productions in the grammar for each sentence. Given a partial hypothesis, we use exact scores for the left-corner subtree that has been fully constructed, and completion estimates for all the symbols and productions whose left and right spans are not yet fully instantiated. 5 Experimental Evaluation Next we evaluate the accuracy of these approaches. The 114,408 recipes described in Section 3 were first cleaned and tokenized. We kept only one recipe per unique description, after mapping to lowercase and normalizing punctuation.3 Finally the recipes were split by author, randomly assigning each to training, development, or test, to prevent 3We found many recipes with the same description, likely copies of some initial recipe made by different users. We selected one representative using a deterministic heuristic. 
Language Code Recipes 77,495 77,495 Train Tokens 527,368 1,776,010 Vocabulary 58,102 140,871 Recipes 5,171 5,171 Dev Tokens 37,541 110,074 Vocabulary 7,741 14,804 Recipes 4,294 4,294 Test Tokens 28,214 94,367 Vocabulary 6,782 13,969 Table 1: Statistics of the data after cleaning and separating into training, development, and test sets. In each case, the number of recipes, tokens (including punctuation, etc.) and vocabulary size are included. overfitting to the linguistic style of a particular author. Table 1 presents summary statistics for the resulting data. Although certain trigger-action pairs occur much more often than others, the recipes in this data are quite diverse. The top 10 trigger-action pairs account for 14% of the recipes; the top 100 account for 37%; the top 1000 account for 72%. 5.1 Metrics To evaluate system performance, several different measures are employed. Ideally a system would output exactly the correct abstract syntax tree. One measure is to count the number of exact matches, though almost all methods receive a score of 0.4 Alternatively, we can look at the AST as a set of productions, computing balanced F-measure. This is a much more forgiving measure, giving partial credit for partially correct results, though it has the caveat that all errors are counted equally. Correctly assigning the trigger and action is the most important, especially because some of the parameter values are tailored for particular users. For example, “turn off my lights when I leave home” requires a “home” location, which varies for each user. Therefore, we also measure accuracy at identifying the correct trigger and action, both at the channel and function level. 5.2 Human comparison One remaining difficulty is that multiple programs may be equally correct. Some descriptions are very difficult to interpret, even for humans. Second, 4Retrieval gets an exact match 3.7% of the time, likely due to near-duplicates from copied recipes. 883 multiple channels may provide similar functionality: both Phillips Hue and WeMo channels provide the ability to turn on lights. Even a well-authored description may not clarify which channel should be used. Finally, many descriptions are underspecified. For instance, the description “notify me if it rains” does not specify whether the user should receive an Android notification, an iOS notification, an email, or an SMS. This is difficult to capture with an automatic metric. To address the prevalence and impact of underspecification and ambiguity in descriptions, we asked humans to perform a very similar task. Human annotators on Amazon Mechanical Turk (“turkers”) were presented with recipe descriptions and asked to identify the correct channel and function (but not parameters). Turkers received careful instructions and several sample description-recipe pairs, then were asked to specify the best recipe for each input. We requested they try their best to find an action and a trigger even when presented with vague or ambiguous descriptions, but they could tag inputs as ‘unintelligible’ if they were unable to make an educated guess. Turkers created recipes only for English descriptions, applying the label ’non-English’ otherwise. Five recipes were gathered for each description. The resulting recipes are not exactly gold, as they have limited training at the task. 
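The production-level F-measure used below (the "Prod F1" column of Table 3) can be computed along the following lines; the exact way a production is read off a tree is our guess rather than the authors' definition, with ASTs again represented as (label, children) tuples.

```python
def production_f1(predicted_ast, gold_ast):
    """Balanced F-measure over the sets of productions in the predicted and
    gold trees; a production is a parent label plus the tuple of its
    children's labels."""
    def productions(node):
        label, children = node
        prods = {(label, tuple(c[0] for c in children))} if children else set()
        for child in children:
            prods |= productions(child)
        return prods

    pred, gold = productions(predicted_ast), productions(gold_ast)
    if not pred or not gold:
        return 0.0
    overlap = len(pred & gold)
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred), overlap / len(gold)
    return 2 * precision * recall / (precision + recall)
```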
However, we imposed stringent qualification requirements to control the annotation quality.5 Our workers were in fair agreement with one another and the gold standard, producing high quality annotation at wages calibrated to local minimum wage. We measure turker agreement with Krippendorff’s α (Krippendorff, 1980), which is a statistical measure of agreement between any number of coders. Unlike Cohen’s κ (Cohen, 1960), the α statistic does not require that coders be the same for each unit of analysis. This property is particularly desirable in our case, since turkers generally differ across HITs. A value of α = 1 indicates perfect agreement, while α ≤0 suggests the absence of agreement or systematic disagreement. Agreement measures on the Mechanical Turk data are shown in Table 2. This shows encouraging levels of agreement for both the trigger and the action, especially considering the large number of categories. Krippendorff (1980) advocates a 0.67 cutoff to allow 5Turkers must have 95% HIT approval rating and be native speakers of English (As an approximation of the latter, we required Turkers be from the U.S.). Manual inspection of annotation on a control set drawn from the training data ensured there was no apparent spam. Trigger Action C C+F C C+F # of categories 128 552 99 229 all .592 .492 .596 .532 Intelligible English .687 .528 .731 .627 Table 2: Annotator agreement as measured by Krippendorff’s α coefficient (Krippendorff, 1980). Agreement is measured on either channel (C) or channel and function (C+F), and on either the full test set (4294 recipes) or its English and intelligible subset (2262 recipes). “tentative conclusion” of agreement, and turkers are relatively close to that level for both trigger and action channels. However, it is important to note that the coding scheme used by turkers is not mutually exclusive, as several triggers and actions (e.g., “SMS” vs. “Android SMS” actions) accomplish similar effects. Thus, our levels of agreement are likely to be greater than suggested by measures in the table. Finally, we also measured agreement on the English and intelligible subset of the data, as we found that confusion between the two labels “nonEnglish” and “unintelligible” was relatively high. As shown in the table, this substantially increased levels of agreement, up to the point where α for both trigger and action channels are above the 0.67 cutoff drawing tentative conclusion of agreement. 5.3 Systems and baselines The retrieval method searches for the closest description in the training data based on character string-edit-distance and returns the recipe for that training program. The phrasal method uses phrasebased machine translation to generate candidate outputs, searching the resulting n-best candidates for the first well-formed recipe. After exploring multiple word alignment approaches, we found that an unsupervised feature-rich method (BergKirkpatrick et al., 2010) worked best, leveraging features of string similarity between the description and the code. We ran MERT on the development data to tune parameters. We used a phrasal decoder with performance similar to Moses. The synchronous grammar method, a recreation of WASP, uses the same word alignment as above, but extracts a synchronous grammar rules from the parallel data (Wong and Mooney, 2006). The classifier approach described in Section 4.3 is independent of word alignment. Finally, the posclass approach from Section 4.4 derives its own deriva884 tion structure from the data. 
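For reference, nominal-data Krippendorff's α can be computed from the per-unit label assignments as sketched below; this is a from-scratch implementation of the standard coincidence-matrix formulation, not the authors' code.

```python
from collections import Counter

def nominal_krippendorff_alpha(ratings_by_unit):
    """ratings_by_unit: one list of labels per unit (e.g. the trigger channels
    chosen by the turkers who rated one description). Units with fewer than
    two ratings are ignored, as the coefficient requires."""
    units = [u for u in ratings_by_unit if len(u) >= 2]
    coincidence = Counter()                         # coincidence matrix o_ck
    for unit in units:
        m = len(unit)
        counts = Counter(unit)
        for c, n_c in counts.items():
            for k, n_k in counts.items():
                pairs = n_c * (n_c - 1) if c == k else n_c * n_k
                coincidence[(c, k)] += pairs / (m - 1)
    n_total = sum(coincidence.values())
    marginals = Counter()
    for (c, _), v in coincidence.items():
        marginals[c] += v
    d_observed = sum(v for (c, k), v in coincidence.items() if c != k)
    d_expected = sum(marginals[c] * marginals[k]
                     for c in marginals for k in marginals if c != k) / (n_total - 1)
    # By convention, report perfect agreement when there is no expected disagreement.
    return 1.0 if d_expected == 0 else 1.0 - d_observed / d_expected
```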
The human annotations are used to establish the mturk human-performance baseline by taking the majority selection of the trigger and action over 5 HITs for each description and comparing the result to the gold standard. The oracleturk humanperformance baseline shows how often at least one of the turkers agreed with the gold standard. In addition, we evaluated all systems on a subset of the test data where at least three humangenerated recipes agreed with the gold standard. This subset represents those programs that are easily reproducible by human workers. A good method should strive to achieve 100% accuracy on this set, and we should perhaps not be overly concerned about the remaining examples where humans disagree about the correct interpretation. 5.4 Results and discussion Table 3 summarizes the main evaluation results. Most of the measures are in concordance. Interestingly, retrieval outperforms the phrasal MT baseline. With a sufficiently long phrase limit, phrasal MT approaches retrieval, but with a few crucial differences. First, phrasal requires an exact match of some substring of the input to some substring of the training data, where retrieval can skip over words. Second, the phrases are heavily dependent on word alignment; we find the word alignment techniques struggle with the noisy IFTTT descriptions. Sync performs similarly to phrasal. The underspecified descriptions challenge assumptions in synchronous grammars: much of the target structure is implied rather than stated. In contrast, the classification method performs quite well. Some productions may be very likely given a prior alone, or may be inferred given other productions and the need for a well-formed derivation. Augmenting this information with positional information as in posclass can help with the attribution problem. Consider the input “Download Facebook Photos you’re tagged in to Dropbox”: we would like the token “Facebook” to invoke only the trigger, not the action. We believe further gains could come from better modeling of the correspondence between derivation and natural language. We find that semantic parsing systems have accuracy nearly as high or even higher than turkers in certain conditions. There are several reasons for this. First, many of the channels overlap in functionality (Gmail vs. email, or Android SMS vs. SMS); likewise functions may be very closely reChannel +Func Prod F1 (a) All: 4,294 recipes retrieval 28.2 19.3 40.8 phrasal 17.3 10.0 34.8 sync 16.2 9.5 34.9 classifier 46.3 33.0 47.3 posclass 47.4 34.5 48.0 mturk 33.4 22.6 –n/a– oracleturk 48.8 37.8 –n/a– (b) Omit non-English: 3,741 recipes retrieval 28.9 20.2 41.7 phrasal 19.3 11.3 35.3 sync 18.1 10.6 35.1 classifier 48.8 35.2 48.4 posclass 50.0 36.9 49.3 mturk 38.4 26.0 –n/a– oracleturk 56.0 43.5 –n/a– (c) Omit non-English & unintelligible: 2,262 recipes retrieval 36.8 25.4 49.0 phrasal 27.8 16.4 39.9 sync 26.7 15.5 37.6 classifier 64.8 47.2 56.5 posclass 67.2 50.4 57.7 mturk 59.0 41.5 –n/a– oracleturk 86.2 59.4 –n/a– (d) ≥3 turkers agree with gold: 758 recipes retrieval 43.3 32.3 56.2 phrasal 37.2 23.5 45.5 sync 36.5 24.1 42.8 classifier 79.3 66.2 65.0 posclass 81.4 71.0 66.5 mturk 100.0 100.0 –n/a– oracleturk 100.0 100.0 –n/a– Table 3: Evaluation results. The first column measures how often the channels are selected correctly for both trigger and action (e.g. Android Phone Call and Google Drive in Figure 1). The next column measures how often both the channel and function are correctly selected for both trigger and action (e.g. 
Android Phone Call::Any phone call missed and Google Drive::Add row to spreadsheet). The last column shows balanced F-measure against the gold tree over all productions in the proposed derivation, from the root production down to the lowest parameter. We show results on (a) the full test data; (b) omitting descriptions marked as non-English by a majority of the crowdsourced workers; (c) omitting descriptions marked as either non-English or unintelligible by the crowd; and (d) only recipes where at least three of five workers agreed with the gold standard. lated (Post a tweet vs. Post a tweet with an image). All the systems with access to thousands of training pairs are at a strong advantage; they can, for 885 INPUT Park in garage when snow tomorrow (a) IFTTT Weather : Tomorrow’s forecast calls for =⇒SMS : Send me an SMS OUTPUT Weather : Tomorrow’s forecast calls for =⇒SMS : Send me an SMS INPUT Suas fotos do instagr.am salvas no dropbox (b) IFTTT Instagram : Any new photo by you =⇒Dropbox : Add file from URL OUTPUT Instagram : Any new photo by you =⇒Dropbox : Add file from URL INPUT Foursquare check-in archive (c) IFTTT Foursquare : Any new check-in =⇒Evernote : Create a note OUTPUT Foursquare : Any new check-in =⇒Google Drive : Add row to spreadsheet INPUT if i post something on blogger it will post it to wordpress (d) IFTTT Blogger : Any new post =⇒WordPress : Create a post OUTPUT Feed : New feed item =⇒Blogger : Create a post INPUT Endless loop! (e) IFTTT Gmail : New email in inbox from =⇒Gmail : Send an email OUTPUT SMS : Send IFTTT any SMS =⇒Philips hue : Turn on color loop Table 4: Example output from the posclass system. For each input instance, we show the original query, the recipe originally authored through IFTTT, and our system output. Instance (a) demonstrates a case where the correct program is produced even though the input is rather tricky. Even the Portuguese query of (b) is correctly predicted, though keywords help here. In instance (c), the query is underspecified, and the system predicts that archiving should be done in Google Drive rather than evernote. Instance (d) shows how we sometimes confuse the trigger and action. Certain queries, such as (e), would require very deep inference: the IFTTT recipe sets up an endless email loop, where our system assembles a strange interpretation based on keyword match. instance, more effectively break such ties by learning a prior over which channels are more likely. Turkers, on the other hand, have neither specific training at this job nor a background corpus and more frequently disagree with the gold standard. Second, there are a number of non-English and unintelligible descriptions. Although the turkers were asked to skip these sentences, the machinelearning systems may still correctly predict the channel and action, since the training set also contains non-English and cryptic descriptions. For the cases where humans agree with each other and with the gold standard, the best automated system (posclass) does fairly well, getting 81% channel and 71% function accuracy. Table 4 has some sample outputs from the posclass system, showing both examples where the system is effective and where it struggles to find the intended interpretation. 6 Conclusions The primary goal of this paper is to highlight a new application and dataset for semantic parsing. Although if-this-then-that recipes have a limited structure, many potential recipes are possible. 
This is a small step toward broad program synthesis from natural language, but is driven by real user data for modern hi-tech applications. To encourage further exploration, we are releasing the URLs of recipes along with turker annotations at http://research.microsoft.com/lang2code/. The best performing results came from a loosely synchronous approach. We believe this is a very promising direction: most work inspired by parsing or machine translation has assumed a strong connection between the description and the operable semantic representation. In practical situations, however, many elements of the semantic representation may only be implied by the description, rather than explicitly stated. As we tackle domains with greater complexity, identifying implied but necessary information will be even more important. Underspecified descriptions open up new interface possibilities as well. This paper considered only single-turn interactions, where the user describes a request and the system responds with an interpretation. An important next step would be to engage the user in an interactive dialogue to confirm and refine the user’s intent and develop a fully-functional correct program. Acknowledgments The authors would like to thank William Dolan and the anonymous reviewers for their helpful advice and suggestions. 886 References Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 47–52, Sofia, Bulgaria, August. Association for Computational Linguistics. Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of Conference on Empirical Methods in Natural Language Processing (EMNLP-13). Taylor Berg-Kirkpatrick, Alexandre Bouchard-Cˆot´e, John DeNero, and Dan Klein. 2010. Painless unsupervised learning with features. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 582–590, Los Angeles, California, June. Association for Computational Linguistics. S.R.K. Branavan, Harr Chen, Luke S. Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learning for mapping instructions to actions. In Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP), Singapore. David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of the 43nd Annual Meeting of the Association for Computational Linguistics (ACL), pages 263–270, Ann Arbor, MI. J. Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37 – 46. Michel Galley, Jonathan Graehl, Kevin Knight, Daniel Marcu, Steve DeNeefe, Wei Wang, and Ignacio Thayer. 2006. Scalable inference and training of context-rich syntactic translation models. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING/ACL), pages 961–968, Sydney, Australia, July. Sumit Gulwani and Mark Marron. 2014. Nlyze: Interactive programming by natural language for spreadsheet data analysis and manipulation. In SIGMOD. Rohit J. Kate and Raymond J. Mooney. 2006. 
Using string-kernels for learning semantic parsers. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics, pages 913–920, Sydney, Australia, July. Association for Computational Linguistics. R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05), pages 1062–1068, Pittsburgh, PA, July. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, NAACL ’03, pages 48–54, Stroudsburg, PA, USA. Association for Computational Linguistics. Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Annual Meeting of the Association for Computational Linguistics (ACL), demonstration session, Prague, Czech Republic, June. Klaus Krippendorff. 1980. Content Analysis: an Introduction to its Methodology. Sage Publications, Beverly Hills, CA. Vu Le, Sumit Gulwani, and Zhendong Su. 2013. Smartsynth: Synthesizing smartphone automation scripts from natural language. In MobiSys. Greg Little and Robert C. Miller. 2007. Keyword programming in java. In Proceedings of the Twentysecond IEEE/ACM International Conference on Automated Software Engineering, ASE ’07, pages 84– 93, New York, NY, USA. ACM. Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. Journal of Machine Learning Research, 2:419–444. Franz Josef Och, Christoph Tillmann, and Hermann Ney. 1999. Improved alignment models for statistical machine translation. In Proc. of the Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, pages 20–28, University of Maryland, College Park, MD, June. Yuk Wah Wong and Raymond Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of the Human Language Technology Conference of the NAACL, Main Conference, pages 439–446, New York City, USA, June. Association for Computational Linguistics. Yuk Wah Wong and Raymond J. Mooney. 2007a. Generation by inverting a semantic parser that uses statistical machine translation. In Proceedings of Human Language Technologies: The Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT), pages 172– 179, Rochester, NY. 887 Yuk Wah Wong and Raymond J. Mooney. 2007b. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics (ACL), pages 960–967, Prague, Czech Republic, June. William A. Woods. 1977. Lunar rocks in natural English: Explorations in natural language question answering. In Antonio Zampoli, editor, Linguistic Structures Processing. Elsevier North-Holland, New York. Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377–403. John M. Zelle and Raymond J. Mooney. 1996. Learning to parse database queries using inductive logic programming. 
In Proceedings of the Thirteenth National Conference on Artificial Intelligence (AAAI-96), pages 1050–1055, Portland, OR, August. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Proceedings of the 21st Conference on Uncertainty in AI, pages 658–666.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 889–898, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Deep Questions without Deep Understanding Igor Labutov Sumit Basu Lucy Vanderwende Cornell University Microsoft Research Microsoft Research 124 Hoy Road One Microsoft Way One Microsoft Way Ithaca, NY Redmond, WA Redmond, WA [email protected] [email protected] [email protected] Abstract We develop an approach for generating deep (i.e, high-level) comprehension questions from novel text that bypasses the myriad challenges of creating a full semantic representation. We do this by decomposing the task into an ontologycrowd-relevance workflow, consisting of first representing the original text in a low-dimensional ontology, then crowdsourcing candidate question templates aligned with that space, and finally ranking potentially relevant templates for a novel region of text. If ontological labels are not available, we infer them from the text. We demonstrate the effectiveness of this method on a corpus of articles from Wikipedia alongside human judgments, and find that we can generate relevant deep questions with a precision of over 85% while maintaining a recall of 70%. 1 Introduction Questions are a fundamental tool for teachers in assessing the understanding of their students. Writing good questions, though, is hard work, and harder still when the questions need to be deep (i.e., high-level) rather than factoid-oriented. These deep questions are the sort of open-ended queries that require deep thinking and recall rather than a rote response, that span significant amounts of content rather than a single sentence. Unsurprisingly, it is these deep questions that have the greatest educational value (Anderson, 1975; Andre, 1979; McMillan, 2001). They are thus a key assessment mechanism for a spectrum of online educational options, from MOOCs to interactive tutoring systems. As such, the problem of automatic question generation has long been of interest to the online education community (Mitkov and Ha, 2003; Schwartz, 2004), both as a means of providing self-assessments directly to students and as a tool to help teachers with question authoring. Much work to date has focused on questions based on a single sentence of the text (Becker et al., 2012; Lindberg et al., 2013; Mazidi and Nielsen, 2014), and the ideal of creating deep, conceptual questions has remained elusive. In this work, we hope to take a significant step towards this challenge by approaching the problem in a somewhat unconventional way. Figure 1: Overview of our ontology-crowd-relevance approach. While one might expect the natural path to generating deep questions to involve first extracting a semantic representation of the entire text, the state-of-the-art in this area is at too early a stage to achieve such a representation effectively. Rather we take a step back from full understanding, and instead propose an ontology-crowd-relevance workflow for generating high-level questions, shown in Figure 1. 
This involves 1) decomposing a text into a meaningful, intermediate, low-dimensional ontology, 2) soliciting high-level templates from the crowd, aligned with this intermediate representation, and 3) for a target text segment, retrieving a subset of the collected templates based 889 on its ontological categories and then ranking these questions by estimating the relevance of each to the text at hand. In this work, we apply the proposed workflow to the Wikipedia corpus. For our ontology, we use a Cartesian product of article categories (derived from Freebase) and article section names (directly from Wikipedia) as the intermediate representation (e.g. category: Person, section: Early life), henceforth referred to as category-section pairs. We use these pairs to prompt our crowd workers to create relevant templates; for instance, (Person, Early Life) might lead a worker to generate the question “Who were the key influences on <Person> in their childhood?”, a good example of the sort of deep question that can’t be answered from a single sentence in the article. We also develop classifiers for inferring these categories when explicit or matching labels are not available. Given a database of such category-section-specific question templates, we then train a binary classifier that can estimate the relevance of each to a new document. We hypothesize that the resulting ranked questions will be both high-level and relevant, without requiring full machine understanding of the text – in other words, deep questions without deep understanding. In the sections that follow, we detail the various components of this method and describe the experiments showing their efficacy at generating high-quality questions. We begin by motivating our choice of ontology and demonstrating its coverage properties (Section 3). We then describe our crowdsourcing methodology for soliciting questions and question-article relevance judgments (Section 4), and outline our model for determining the relevance of these questions to new text (Section 5). After this we describe the two datasets that we construct for the evaluation of our approach and present quantitative results (Section 6) as well as examples of our output and an error analysis (Section 7) before concluding (Section 8). 2 Related Work We consider three aspects of past research in automatic question generation: work that focuses on the grammaticality of natural language question generation, work that focuses on the semantic quality of generated questions, i.e. the “what to ask about” rather than “how to ask it,” and finally work that builds a semantic representation of text in order to generate higher-level questions. Approaches focusing on the grammaticality of question generation date back to the AUTOQUEST system (Wolfe, 1976), which examined the generation of Wh-questions from single sentences. Later systems addressing the same goal include methods that use transformation rules (Mitkov and Ha, 2003), template-based generation (Chen et al., 2009; Curto et al., 2011) and overgenerate-and-rank methods (Heilman and Smith, 2010a). Another approach has been to create fill-in-the-blank questions from single sentences to ensure grammaticality (Agarwal et al. 2011, Becker et al. 2012). More relevant to our direction is work on the semantic aspect of question generation, which has become a more active research area in the past several years. Several authors (Mazidi and Nielsen 2014; Linberg et al. 
2013) generate questions according to the semantic role patterns extracted from the source sentence. Becker et al. (2012) also leverage semantic role labeling within a sentence in a supervised setting. We hope to continue in this direction of semantic focus, but extend the capabilities of question generation to include openended questions that go far beyond the scope of a single sentence. Other work has taken on the challenge of deeper questions by attempting to build a semantic representation of arbitrary text. This has included work using concept maps over keywords (Olney et al. 2012) and minimal recursion semantics (Yao 2010) to reason over concepts in the text. While the work of (Olney et al. 2012) is impressive in its possibilities, the range of the types of questions that can be generated is restricted by a relatively specific set of relations (e.g. Is-A, PartOf) captured in the ontology of the domain (biology textbook). Mannem et al. (2010) observe as we have that "capturing the exact true meaning of a paragraph is beyond the reach of current NLP systems;" thus, in their system for Shared Task A (for paragraph-level questions (Rus et al. 2010)) they make use of predicate argument structures along with semantic role labeling. However, the generation of these questions is restricted to the first sentence of the paragraph. Though motivated by the same noble impulses of these authors to achieve higher-level questions, our hope is that we can bypass the challenges and constraints of semantic parsing and generate deep questions via a more holistic approach. 890 3 An Ontology of Categories and Sections The key insight of our approach is that we can leverage an easily interpretable (for crowd workers), low-dimensional ontology for text segments in order to crowdsource a set of high-level, reusable templates that generalize well to many documents. The choice of this representation must strike a balance between domain coverage and the crowdsourcing effort required to obtain that coverage. Inasmuch as Wikipedia is deemed to have broad coverage of human knowledge, we can estimate domain coverage by measuring what fraction of that corpus is covered by the proposed representation. In our work, we have developed a category-section ontology using annotations from Freebase and Wikipedia (English), and now describe its structure and coverage in detail. For the high-level categories, we make use of the Freebase “notable type” for each Wikipedia article. In contrast to the noisy default Wikipedia categories, the Freebase “notable types” provide a clean high-level encapsulation of the topic or entity discussed in a Wikipedia article. As we wish to maximize coverage, we compute the histogram by type and take the 300 most common ones across Wikipedia. We further merge these into eight broad categories to reduce crowdsourcing effort: Person, Location, Event, Organization, Art, Science, Health, and Religion. These eight categories cover 78% of Wikipedia articles (see Figure 2a); the mapping between Freebase types and our categories will be made available as part of our corpus (see Section 8). To achieve greater specificity of questions within the articles, we make use of Wikipedia sections, which offer a high-level segmentation of the content. The Cartesian product of our categories from above and the most common Wikipedia section titles (per category) then yield an interpretable, low-dimensional representation of the article. 
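A minimal sketch of how this representation keys the question-template database is given below. The Freebase-type mapping shown is a tiny illustrative excerpt (the full 300-type mapping is released with the corpus), the function names are ours, and the template-instantiation step assumes the `<Person>`/`<Location>` placeholder convention used in the example templates.

```python
# Illustrative excerpt of the Freebase notable-type -> category mapping
# (the released corpus contains the full mapping for ~300 types).
FREEBASE_TYPE_TO_CATEGORY = {
    "/people/person": "Person",
    "/location/citytown": "Location",
    "/location/country": "Location",
}

def category_section_keys(notable_type, section_titles):
    """(category, section) keys for one article, or [] if its type is uncovered."""
    category = FREEBASE_TYPE_TO_CATEGORY.get(notable_type)
    return [(category, s) for s in section_titles] if category else []

def retrieve_questions(template_db, notable_type, section_titles, title_entity):
    """Pull crowd-authored templates for each covered section and instantiate
    them with the entity taken from the article title."""
    questions = []
    for key in category_section_keys(notable_type, section_titles):
        for template in template_db.get(key, []):
            placeholder = "<" + key[0] + ">"
            questions.append((key, template.replace(placeholder, title_entity)))
    return questions
```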
For instance, the set of category-section pairs for an article about Albert Einstein contains (Person, Early_life), (Person, Awards), and (Person, Political_views) as well as several others. For each category, the section titles that occur most frequently represent central themes in articles belonging to that category. We therefore hypothesize that question templates authored for such high-coverage titles are likely to generalize to a large number of articles in that category. Table 1 below shows the four most frequent sections for each of our eight categories. Person Location Organization Art Early life History History Plot Career Geography Geography Reception Pers. life Economy Academics History Biography Demographics Demographics Production Science Event Health Religion Descript. Background Treatment Etymology Taxonomy Aftermath Diagnosis Icongraphy History Battle Causes Worship Distributn. Prelude History Mythology Table 1: Most frequent section titles by category. As the crowdsourcing effort is directly proportional to the size of the ontology, our goal is to select the smallest set of pairs that will provide sufficient coverage. As with categories, the cut Figure 2: Coverage properties of our category-section representation: (a) fraction of Wikipedia articles covered by the top j most common Freebase types, grouped by our eight higher-level categories. (b) Average fraction of sections covered per document if only the top k most frequent sections are used; each line represents one of our eight categories. 891 off for the number of sections used for each category is guided by the trade-off between coverage and crowdsourcing costs. Figure 2b plots the average fraction of an article covered by the top k sections from each category. We found that the top 50 sections cover 30% to 55% of the sections of an individual article (on average) across our categories. This implies that by only crowdsourcing question templates for those 50 sections per category, we would be able to ask questions about a third to a half of the sections of any article. Of course, if we were to limit ourselves to only segments with these labels at runtime, we would completely miss many articles as well as texts outside of Wikipedia. To extend our reach, we also develop the means for category and section inference from raw text in Section 5 below, for cases in which ontological labels are either not available or are not contained within our limited set. 4 Crowdsourcing Methodology We designed a two-stage crowdsourcing pipeline to 1) collect templates targeted to a set of category-section pairs and 2) obtain binary relevance judgments for the generated templates in relation to a set of article segments (for Wikipedia, these are simply sections) that match in category-section labels. We recruit Mechanical Turk workers for both stages of the pipeline, filtering for workers from the United States due to native English proficiency. A total of 307 unique workers participated in the two tasks combined (78 and 229 workers for the generation and ratings tasks respectively). Figure 3: Prompt for the generation task for the category-section pair (Person, Legacy). 4.1 Question generation task Following the coverage analysis above, we select the 50 most frequent sections for the top two categories, Person and Location, yielding 100 category-section pairs. As these two categories cover nearly 50% of all articles on Wikipedia, we believe that they suffice in demonstrating the effectiveness of the proposed methodology. 
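The coverage trade-off behind Figure 2b can be reproduced with a simple computation over one category's articles, sketched below under the assumption that each article is given as its list of section titles.

```python
from collections import Counter

def section_coverage_curve(articles_sections, max_k=50):
    """For k = 1..max_k, the average fraction of an article's sections whose
    titles fall among the k most frequent section titles of the category."""
    title_counts = Counter(t for sections in articles_sections for t in sections)
    ranked = [t for t, _ in title_counts.most_common(max_k)]
    curve = []
    for k in range(1, max_k + 1):
        top_k = set(ranked[:k])
        fractions = [sum(t in top_k for t in sections) / len(sections)
                     for sections in articles_sections if sections]
        curve.append(sum(fractions) / len(fractions))
    return curve
```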
For each category-section pair, we instructed 10 (median) workers to generate a question regarding a hypothetical entity belonging to the target with the prompt in Figure 3. Additional instructions and an interactive tutorial were pre-administered, guiding the workers to formulate appropriately deep questions, i.e. questions that are likely to generalize to many articles, while avoiding factoid questions like “When was X born?” In total, 995 question templates were added to our question database using this methodology (only 0.5% of all generated questions were exact repeats of existing questions). We confirm in section 4.2 that workers were able to formulate deep, interesting and relevant questions whose answers spanned more than a single sentence and that generalized to many articles using this prompt. In earlier pilots, we tried an alternative prompt which also presented the text of a specific article segment. In Figure 4, we show the average scope and relevance of questions generated by workers under both prompt conditions. As the figure demonstrates, the alternative prompt showing specific article text resulted in questions that generalized less well (workers’ questions were found to be relevant to fewer articles), likely because the details in the text distracted the workers from thinking broadly about the domain. These questions also had a smaller scope on average, i.e., answers to these questions were contained in shorter spans in the text. The differences in scope and relevance between the two prompt designs were both significant (p-values: 0.006 and 4.5e-11 respectively, via two-sided Welch’s t-tests). Figure 4: Average relevance and scope of worker-generated questions versus how the workers were prompted. 892 4.2 Question relevance rating task For our 100 category-section pairs, 4 (median) article segments within reasonable length for a Mechanical Turk task (200-1000 tokens) were drawn at random from the Wikipedia corpus; this resulted in a set of 513 article segments. Each worker was then presented with one of these segments alongside at most 10 questions from the question template database matching in categorysection; templates were converted into questions by filling in the article-specific entity extracted from the title. Workers were requested to rate each question along three dimensions: relevance, quality, and scope, as detailed below. Quality and scope ratings were only requested when the worker determined the question to be relevant.  Relevance: 1 (not relevant) – 4 (relevant) Does the article answer the question?  Quality: 1 (poor) – 4 (excellent) Is this question well-written?  Scope: 1 (single-sentence) – 4 (multi-sentence/paragraph) How long is the answer to this question? A median of 3 raters provided an independent judgment for each question-article pair. The mean relevance, quality and scope ratings across the 995 questions were 2.3 (sd=0.83), 3.5 (sd=.65) and 2.6 (sd=1.0) respectively. Note that the sample sizes for scope and quality were smaller, 774 and 778 respectively, as quality/scope judgments were not gathered for questions deemed irrelevant. We note that 80% of the relevant crowd-sourced questions had a median scope rating larger than 1 sentence, and 23% had a median scope rating of 4, defined as “the answer to this question can be found in many sentences and paragraphs,” corresponding to the maximum attainable scope rating. 
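The significance test reported above is a standard two-sided Welch's (unequal-variance) t-test; with SciPy it amounts to the following, where the two rating lists are placeholders for the per-question scope (or relevance) scores gathered under the two prompt conditions.

```python
from scipy import stats

def compare_prompt_conditions(ratings_generic_prompt, ratings_article_prompt):
    """Two-sided Welch's t-test between the two prompt conditions; returns the
    t statistic and the p-value."""
    return stats.ttest_ind(ratings_generic_prompt, ratings_article_prompt,
                           equal_var=False)
```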
Note that while in this work, we have only used the scope judgments to report summary statistics about the generated questions, in future work these ratings could be used to build a scope classifier to filter out questions targeting short spans of text. As described in Section 5.2, the relevance judgments are converted to binary relevance ratings for training the relevance classifier (we consider relevance ratings {1, 2} as "not relevant" and {3, 4} as "relevant"). In terms of agreement between raters for these binary relevance labels, we obtained a Fleiss' Kappa of 0.33, indicating fair agreement.

5 Model

There are two key models to our system: the first is for category and section inference of a novel article segment, which allows us to infer the keys to our question database when explicit labels are not available. The second is for question relevance prediction, which lets us decide which question templates from the database's store for that category-section actually apply to the text at hand.

5.1 Category/section inference

Both category and section inference were cast as standard text-classification problems. Category inference is performed on the whole article, while section inference is performed on the individual article segments (i.e., sections). We trained individual logistic regression classifiers for the eight categories and the 50 top section types for each one (a total of 400) using the default L2 regularization parameter in LIBLINEAR (Fan, 2008). For section inference, a total of 736,947 article segments were sampled from Wikipedia (June 2014 snapshot), each belonging to one of the 400 section types and within the same length bounds from Section 4.2 (200-1000 tokens). For category inference, we sampled a total of 86,348 articles with at least 10 sentences and belonging to one of our eight categories. In both cases, a binary dataset was constructed for a one-against-all evaluation, where the negative instances were sampled randomly from the negative categories or sections (there was an average 17% and 32% positive skew in the section and category datasets, respectively). Basic tf-idf features (using a vocabulary of 200,000 after eliminating stopwords) were used in both text classification tasks. Applying the category/section inference to held-out portions of the dataset (30% for each category/section) resulted in balanced accuracies of 83%/95% respectively, which gave us confidence in the inference. Keep in mind that this is not a strict bound on our question generation performance, since the inferred category/section, while not matching the label perfectly, could still be sufficiently close to produce relevant questions (for instance, we could misrecognize "Childhood" as "Early Life"). We explore the ramifications of this in our end-to-end experiments in Section 6.

5.2 Relevance Classification

We also cast the problem of question/article relevance prediction as one of binary classification, where we map a question-article pair to a relevance score; as such our features had to combine aspects of both the question and the article. Our core approach was to use a vector of the component-wise Euclidean distances between individual features of the question and article segment, i.e., the i-th feature vector component f_i is given by f_i = (q_i - a_i)^2, where q_i and a_i are the components of the question and article feature vectors.
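A minimal sketch of the component-wise distance vector just defined is shown below; `q_vec` and `a_vec` stand for whatever fixed-length feature vectors are used for the question template and the article segment (the concrete features are described in the next paragraph), and the function name is ours.

```python
import numpy as np

def componentwise_distances(q_vec, a_vec):
    # f_i = (q_i - a_i)^2 for every feature dimension i.
    q_vec = np.asarray(q_vec, dtype=float)
    a_vec = np.asarray(a_vec, dtype=float)
    return (q_vec - a_vec) ** 2
```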
For the feature representation, we utilized a concatenation of continuous embedding features: 300 features from a Word2Vec embedding (Mikolov, 2013) and 200,000 tfidf features (as with category/section classification above). As question templates are typically short, though, we found that this representation alone performed poorly. As a result, we augmented the vector by concatenating additional distance features between the target article segment and one specific instance of an entire article for which the question applied. This augmenting article was selected at random from all those for which the template was judged to be relevant. The resulting feature vector was thus doubled in length, where the first 𝑘 distances were between the question template and the target segment, and the next 𝑘 were between the augmenting article and the target segment. Note that the augmenting article segments were removed from the training/test sets. To train this classifier, we assumed that we would be able to acquire at least 𝑛 positive relevance labels for each question template, i.e., 𝑛 article segments judged to be relevant to each template for inclusion in the training set. We explore the effect of increasing values of 𝑛, from 0 (where no relevance labels are available) to 3 (referred to as conditions T0..T3 in Figure 5). We then trained and evaluated the relevance classifier, a single logistic regression model using LIBLINEAR with default L2 regularization, using 10-fold cross-validation on DATASET I (see Section 6). Figure 5 depicts a series of ROC curves summarizing the performance of our template relevance classifier on unseen article segments. As expected, we see increasing performance with increasing 𝑛. However, the benefit drops off after 3 instances (i.e., T4 is only marginally better than T3). While the character of the curves is modest, keep in mind we are already filtering questions by retrieving them from the database for the inferred category-section (which by itself gives us a precision of .74 – see green bars in Figure 6); this ROC represents the “lift” achieved by further filtering the questions with our relevance classifier, resulting in far higher precision (.85 to .95 – see blue bars in Figure 6). Figure 5: ROC curves for the task of question-toarticle relevance prediction. Tn means that n positively labeled article segments were available for each question template during training. 6 Experiments and Results In this section, we describe the datasets used for training the relevance classifier in Section 5.2 (DATASET I) as well as for end-to-end performance on unlabeled text segments (DATASET II). We then evaluate the performance on this second dataset under three settings: first, when the category and section are known, second, when those labels are unavailable, and third, when neither the labels nor the relevance classifier are available. 6.1 DATASET I: for the Relevance Classifier The first dataset (DATASET I) was intended for training and evaluating the relevance classifier, and for this we assumed the category and section labels were known. As such, judgments were collected only for questions templates authored for a given article’s actual category and section labels. After filtering out annotations from unreliable workers (based on their pre-test results) as well as those with inter-annotator agreement below 60%, we were left with a set of 995 rated questions, spanning across two categories (Person and Location) and 50 sections per category (100 categorysection pairs total). 
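Returning to the relevance classifier of Section 5.2, the sketch below assembles the doubled feature vector (template-vs-segment distances concatenated with augmenting-article-vs-segment distances) and fits a plain L2-regularized logistic regression. The embedding inputs and names are placeholders, and this is a simplified reading of the setup rather than the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def relevance_features(q_vec, seg_vec, aug_vec):
    # First half: squared distances between question template and target
    # segment; second half: the same distances between one augmenting
    # (known-relevant) article segment and the target segment.
    return np.concatenate([(q_vec - seg_vec) ** 2, (aug_vec - seg_vec) ** 2])

def train_relevance_classifier(triples, labels):
    # `triples` is a list of (q_vec, seg_vec, aug_vec) numpy arrays and
    # `labels` the binary relevance judgments for each question-article pair.
    X = np.vstack([relevance_features(q, s, a) for q, s, a in triples])
    clf = LogisticRegression(penalty="l2", solver="liblinear")
    return clf.fit(X, np.asarray(labels))
```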
This corresponded to a total of 4439 relevance tuples (label, question, article) where label is a binary relevance rating aggregated via majority vote across multiple raters. The relevance labels were skewed towards the positive (relevant) class with 63% relevant instances. This is of course a mostly unrealistic data setting for applications of question generation (known category and section labels), but greatly 894 useful in developing and evaluating the relevance classifier; we thus used this dataset only for that purpose (see Section 5.2 and Figure 5). 6.2 DATASET II: for End-to-End Evaluation For an end-to-end evaluation we need to examine situations where the category and section labels are not available and we must rely on inference instead. As this is the more typical use case for our method, it is critical to understand how the performance will be affected. For DATASET II, then, we first sampled articles from the Wikipedia corpus at random (satisfying the constraints described in Section 3) and then performed category and section inference on the article segments. The category c with the highest posterior probability was chosen as the inferred category, while all section types 𝑠𝑖 with a posterior probability greater than 0.6 were considered as sources for templates. Only articles whose inferred category was Person or Location were considered, but given the noise in inference there was no guarantee that the true labels were of these categories. We continued this process until we retrieved a total of 12 articles. For each article segment in these 12, we drew a random subset of at most 20 question templates from our database matching the inferred category and section(s), then ordered them by their estimated relevance for presentation to judges. We then solicited an additional 62 Mechanical Turk workers to a rating task set up according to the same protocol as for DATASET I. After aggregation and filtering in the same way, the second dataset contained a total 256 (label, question, article) relevance tuples, skewed towards the positive class with 72% relevant instances. 6.3 Information Retrieval–based Evaluation As our end-to-end task is framed as the retrieval of a set of relevant questions for a given article segment, we can measure performance in terms of an information retrieval-based metric. Consider a user who supplies an article segment (the “query” in IR terms) for which she wants to generate a quiz: the system then presents a ranked list of retrieved questions, ordered according to their estimated relevance to the article. As she makes her way down this ranked list of questions, adding a question at a time to the quiz (set Q), the behavior of the precision and recall (with respect to relevance to the article segment) of the questions in Q, summarizes the performance of the retrieval system (i.e. the Precision-Recall (PR) curve (Manning, 2008)). We summarize the performance of our system by averaging the individual article segments’ PR curves (linearly interpolated) from DATASET II, and present the average precision over bins of recall values in Figure 6. We consider the following experimental conditions:  Known category/section, using relevance classifier (red): This is the case in which the actual category and section labels of the query article are known, and only the questions that match exactly in category and section are considered for relevance classification (i.e. added to Q if found relevant by the classifier). 
Recall is computed with respect to the total number of relevant questions in DATASET II, including those corresponding to sections different from the section label of the article.  Inferred category/section, using relevance classifier (blue): This is the expected use case, where the category/section labels are not known. Questions matching in category and section(s) to the inferred category and section of each article are considered and ranked in Q by their score from the relevance classifier. Recall is computed with respect to the total number of relevant questions in DATASET II.  Inferred category/section, ignoring relevance classifier (green): This is a baseline where we only use category/section inference and then retrieve questions from the database without filtering: all questions that match in inferred category and section(s) of the article are added to Q in a random ranking order, without performing relevance classification. As we examine Figure 6, it is important to point out a subtlety in our choice to calculate recall of the known category/section condition (red bars) with respect to the set of all relevant questions, including those that are matched to sections different from the original (labeled) sections. While this condition by construction does not have access to questions of any other section, the resulting limitation in recall underlines the importance of performing section inference: without inference, we achieve a recall of no greater than 0.4. As we had hypothesized, while the labels of the sections play an instrumental role in instructing the crowd to generate relevant questions, the resulting questions often tend to be relevant to content found under different but semantically related sections as well. Leveraging the available questions of these related sections (by performing inference) boosts recall at the expense of only a small degree of precision (blue bars). If we forgo relevance classification entirely, we get a constant precision of 0.74 (green bars) as mentioned in 895 Section 5.2; it is clear that the relevance classifier results in a significant advantage. While there is a slight drop in precision when using inference, this is at least partly due to the constraints that were imposed during data-collection and relevance classifier training, i.e., all pairs of articles and questions belonged to the same category and section. While this constraint made the crowdsourcing methodology proposed in this work tractable, it also prevented the inclusion of training examples for sections that could potentially be inferred at test time. One possible approach to remedy this would be sample from article segments that are similar in text (in terms of our distance metric) as opposed to only segments exactly matching in category and section. Figure 6: Precision-recall results for the end-toend experiment, grouped in bins of recall ranges. 7 Examples and Error Analysis In Table 2 we show a set of sample retrieved questions and the corresponding correctness of the relevance classifier’s decision with respect to the judgment labels; examining the errors yields some interesting insights. Consider the false positive example shown in row 8, where the category correctly inferred as Location, but section title was inferred as Transportation instead of Services. This mismatch resulted in the following template authored for (Location, Transportation) being retrieved: "What geographic factors influence the preferred transport methods in <entity>?" 
To the relevance classifier, this particular template (containing the word "transport") appears to be relevant on the surface level to the text of an article segment about schedules (Services) at a railway station. However, as this template never appeared to judges in the context of a Services segment (a section that differs considerably in theme from the inferred section, Transportation), the relevance classifier unsurprisingly makes the wrong call.

True section | Inferred section | Result | Generated Question
Honours | Later Life | TP | What accomplishments characterized the later career of Colin Cowdrey?
Acting Career | Television | TP | How did Corbin Bernstein's television career evolve over time?
Route Description | Geography | TP | What are some unique geographic features of Puerto Rico Highway 10?
Athletics | Athletics | TN | How much significance do people of DeMartha Catholic High School place on athletics?
Route Description | Geography | TN | How does the geography of Puerto Rico Highway 10 impact its resources?
Work | Reception | FN | What type of reaction did Thornton Dial receive?
Acting Career | Later Career | FP | What were the most important events in the later career of Corbin Berstein?
Services | Transportation | FP | What geographic factors influence the preferred transport methods in Weymouth Railway Station?
Later Career | Legacy | FP | How has Freddy Mitchell's legacy shaped current events?
Table 2: Examples of retrieved questions. TP, TN, FP, FN stand for true/false positive/negative with respect to the relevance classification.

In considering additional sources of relevance classification errors, recall that we employ a single relevant article segment for the purpose of augmenting a template's feature representation. In the case of the false negative example (row 6 in Table 2), the sensitivity of the classifier to the particular augmenting article used is apparent. Upon inspecting the target article segment (article: Thornton Dial, section: Work) and the augmenting article segment (article: Syed Masood, section: Reception), it is clear that the inferred section Reception is a reasonable title for the Work section of the article on Thornton Dial, making the question "What type of reaction did Thornton Dial receive?" a relevant question to the target article (as reflected in the human judgment). However, although both segments generally talk about "reception," the language across the two segments is distinct: the critical reception of Thornton Dial the visual artist is described in a different way from the reception of Syed Masood the actor, resulting in little overlap in surface text, and as a result the relevance classifier falsely rejects the question.

Reasonable substitutions for inferred sections can also lead to false positives, as in row 9, for the article Freddy Mitchell. While Legacy (the inferred section) is a believable substitute for the true label of Later Career, in this case the article segment did not discuss his legacy. However, there was a good match between the augmenting article for this template and the section. We hypothesize that in both this and the previous examples a broader sample of augmenting article segments for each category/section is likely to be effective at mitigating these types of errors.

8 Conclusion

We have presented an approach for generating relevant, deep questions that are broad in scope and apply to a wide range of documents, all without constructing a detailed semantic representation of the text.
Our three primary contributions are 1) our insight that a low-dimensional ontological document representation can be used as an intermediary for retrieving and generalizing high-level question templates to new documents, 2) an efficient crowdsourcing scheme for soliciting such templates and relevance judgments (of templates to article) from the crowd in order to train a relevance classification model, and 3) using category/section inference and relevance prediction to retrieve and rank relevant deep questions for new text segments. Note that the approach and workflow presented here constitute a general framework that could potentially be useful in other language generation applications. For example, a similar setup could be used for high-level summarization, where question templates would be replaced with “summary snippets.” Finally, to encourage the community to further explore this approach as well as to compare it with others, we are releasing all of our data (category mappings, generated templates, and relevance judgments) at http://research.microsoft.com/~sumitb/questiongeneration . References Manish Agarwal, Rakshit Shah, and Prashanth Mannem. 2011. Automatic Question Generation Using Discourse Cues. In Proceedings of the 6th Workshop on Innovative Use of NLP for Building Educational Applications. Richard C. Anderson and W. Barry Biddle. 1975. On Asking People Questions About What they are Reading. Psychology of Learning and Motivation. 9:90-132. Thomas Andre. 1979. Does Answering Higher-level Questions while Reading Facilitate Productive Learning? Review of Educational Research 49(2): 280-318. Lee Becker, Sumit Basu, and Lucy Vanderwende. 2012. Mind the Gap: Learning to Choose Gaps for Question Generation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Wei Chen, Gregory Aist, and Jack Mostow. 2009. Generating Questions Automatically from Informational Text. In S. Craig & S. Dicheva (Ed.), Proceedings of the 2nd Workshop on Question Generation. Sérgio Curto, Ana Cristina Mendes, and Luisa Coheur. 2011. Exploring Linguistically-rich Patterns for Question Generation. In Proceedings of the UCNLG+Eval: Language Generation and Evaluation Workshop. Rong-En Fan, Kai-Wei Chang, Cho-Jui Hsieh, XiangRui Wang, Chih-Jen Lin. 2008. LIBLINEAR: A library for large linear classification. Journal of Machine Learning Research 9: 1871-1874. Michael Heilman and Noah Smith. 2010. Good Question! Statistical Ranking for Question Generation. In Proceedings of NAACL/HLT. David Lindberg, Fred Popowich, John Nesbit, and Phil Winne. 2013. Generating Natural Language Questions to Support Learning On-line. In Proceedings of the 14th European Workshop on Natural Language Generation. Prashanth Mannem, Rashmi Prasad, and Aravind Joshi. 2010. Question generation from paragraphs at UPenn: QGSTEC system description. In Proceedings of the Third Workshop on Question Generation. Christopher D. Manning, Prabhakar Raghavan, and Hinrich Schutze. 2008. Introduction to Information Retrieval. Cambridge: Cambridge university press Karen Mazidi and Rodney D. Nielsen. 2014. Linguistic Considerations in Automatic Question Generation. In Proceedings of ACL. James H. McMillan. 2001. Secondary Teachers' Classroom Assessment and Grading Practices." Educational Measurement: Issues and Practice 20(1): 2032. Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. 
Distributed Representations of Words and Phrases and their Compositionality. In Proceedings of Advances in Neural Information Processing Systems. Ruslan Mitkov and Le An Ha. 2003. Computer-Aided Generation of Multiple-Choice Tests. In Proceed897 ings of the HLT-NAACL 2003 Workshop on Building Educational Applications Using Natural Language Processing. Andrew M. Olney, Arthur C. Graesser, and Natalie K. Person. 2012. Question Generation from Concept Maps. Dialogue & Discourse 3(2): 75-99. Daniel Ramage, David Hall, Ramesh Nallapati, and Christopher D. Manning. 2009. Labeled LDA: A Supervised Topic Model for Credit Attribution in Multi-labeled Corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Vasile Rus, Brendan Wyse, Paul Piwek, Mihai Lintean, Svetlana Stoyanchev, and Cristian Moldovan. 2010. Overview of The First Question Generation Shared Task Evaluation Challenge. In Proceedings of the Third Workshop on Question Generation. Lee Schwartz, Takako Aikawa, and Michel Pahud. 2004. Dynamic Language Learning Tools. In Proceedings of STIL/ICALL Symposium on Computer Assisted Learning. John H. Wolfe. 1976. Automatic Question Generation from Text - an Aid to Independent Study. In Proceedings of ACM SIGCSE-SIGCUE Joint Symposium on Computer Science Education. Xuchen Yao and Yi Zhang. 2010. Question generation with minimal recursion semantics. In Proceedings of QG2010: The Third Workshop on Question Generation. 898
2015
86
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 899–908, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics The NL2KR Platform for building Natural Language Translation Systems Nguyen H. Vo, Arindam Mitra and Chitta Baral School of Computing, Informatics and Decision Systems Engineering Arizona State University {nguyen.h.vo, amitra7, chitta }@asu.edu Abstract This paper presents the NL2KR platform to build systems that can translate text to different formal languages. It is freelyavailable1, customizable, and comes with an Interactive GUI support that is useful in the development of a translation system. Our key contribution is a userfriendly system based on an interactive multistage learning algorithm. This effective algorithm employs Inverse-λ, Generalization and user provided dictionary to learn new meanings of words from sentences and their representations. Using the learned meanings, and the Generalization approach, it is able to translate new sentences. NL2KR is evaluated on two standard corpora, Jobs and GeoQuery and it exhibits state-of-the-art performance on both of them. 1 Introduction and Related Work For natural language interaction with systems one needs to translate natural language text to the input language of that system. Since different systems (such as a robot or database system) may have different input language, we need a way to translate natural language to different formal languages as needed by the application. We have developed a user friendly platform, NL2KR, that takes examples of sentences and their translations (in a desired target language that varies with the application), and some bootstrap information (an initial lexicon), and constructs a translation system from text to that desired target language. 1http://nl2kr.engineering.asu.edu/ Our approach to translate natural language text to formal representation is inspired by Montague’s work (Montague, 1974) where the meanings of words and phrases are expressed as λ-calculus expressions and the meaning of a sentence is built from semantics of constituent words through appropriate λ-calculus (Church, 1936) applications. A major challenge in using this approach has been the difficulty of coming up with the λ-calculus representation of words. Montague’s approach has been widely used in (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2010) to translate natural language to formal languages. In ZC05 (Zettlemoyer and Collins, 2005) the learning algorithm requires the user to provide the semantic templates for all words. A semantic template is a λ-expression (e.g. λx.p(x) for an arity one predicate), which describes a particular pattern of representation in that formal language. With all these possible templates, the learning algorithm extracts the semantic representation of the words from the formal representation of a sentence. It then associates the extracted meanings to the words of the sentence in all possible ways and ranks the associations according to some goodness measure. While manually coming up with semantic templates for one target language is perhaps reasonable, manually doing it for different target languages corresponding to different applications may not be a good idea as manual creation of semantic templates requires deep understanding of translation to the target language. This calls for automating this process. 
In UBL (Kwiatkowski et al., 2010) this process is automated by restricting the choices of formal representation and learning the meanings in a brute force manner. Given, a sentence S and its representation M in the restricted formal language, 899 it breaks the sentence into two smaller substrings S1, S2 and uses higher-order unification to compute two λ-terms M1, M2 which combines to produce M. It then recursively learns the meanings of the words, from the sub-instance < S1, M1 > and < S2, M2 >. Since, there are many ways to split the input sentence S and the choice of M1, M2 can be numerous, it needs to consider all possible splittings and their combinations; which produces many spurious meanings. Most importantly, their higher-order unification algorithm imposes various restrictions (such as limited number of conjunctions in a sentence, limited forms of functional application) on the meaning representation language which severely limits its applicability to new applications. Another common drawback of these two algorithms is that they both suffer when the test sentence contains words that are not part of the training corpus. Our platform NL2KR uses a different automated approach based on Inverse-λ (section 2.1) and Generalization (section 2.2) which does not impose such restrictions enforced by their higherorder unification algorithm. Also, Generalization algorithm along with Combinatory Categorical Grammar (Steedman, 2000) parser, allows NL2KR to go beyond the training dictionary and translate sentences which contain previously unseen words. The main aspect of our approach is as follows: given a sentence, its semantic representation and an initial dictionary containing the meaning of some words, NL2KR first obtains several derivation of the input sentence in Combinatory Categorical Grammar (CCG). Each CCG derivation tree describes the rules of functional application through which constituents combine with each other. With the user provided initial dictionary, NL2KR then traverses the tree in a bottomup fashion to compute the semantic expressions of intermediate nodes. It then traverses the augmented tree in a top-down manner to learn the meaning of missing words using Inverse-λ (section 2.1). If Inverse-λ is not sufficient to learn the meaning of all unknown words, it employs Generalization (section 2.2) to guess the meanings of unknown words with the meaning of known similar words. It then restarts the learning process with the updated knowledge. The learning process stops if it learns the meanings of all words or fails to learn any new meaning in an iteration. In the latter case, it shows the augmented tree to the user. The user can then provide meanings of some unknown words and resumes the learning process. Another distinguishing feature of NL2KR is its user-friendly interface that helps users in creating their own translation system. The closest system to NL2KR is the UW Semantic Parsing Framework (UW SPF) (Artzi and Zettlemoyer, 2013) which incorporates the algorithms in (Zettlemoyer and Collins, 2005; Kwiatkowski et al., 2010) . However, to use UW SPF for the development of a new system, the user needs to learn their coding guidelines and needs to write new code in their system. NL2KR does not require the users to write new code and guides the development process with its rich user interface. We have evaluated NL2KR on two standard datasets: GeoQuery (Tang and Mooney, 2001) and Jobs (Tang and Mooney, 2001). 
GeoQuery is a database of geographical questions and Jobs contains sentences with job related query. Experiments demonstrate that NL2KR can exhibit stateof-the-art performance with fairly small initial dictionary. The rest of the paper is organized as follows: we first present the algorithms and architecture of the NL2KR platform in section 2; we discuss about the experiments in section 3; and finally, we conclude in section 4. 2 Algorithms and Architecture The NL2KR architecture (Figure 1) has two subparts which depend on each other (1) NL2KRL for learning and (2) NL2KR-T for translation. The NL2KR-L sub-part takes the following as input: (1) a set of training sentences and their target formal representations, and (2) an initial lexicon or dictionary consisting of some words, their CCG categories, and their meanings in terms of λcalculus expressions. It then constructs the CCG parse trees and uses them for learning of word meanings. Learning of word meanings is done by using Inverse-λ and Generalization (Baral et al., 2012; Baral et al., 2011) and ambiguity is addressed by a Parameter Learning module that learns the weights of the meanings. The learned meanings update the lexicon. The translation sub-part uses this updated lexicon to get the meaning of all the words in a new sentence, and combines them to get the meaning of the new sentence. Details of each module will be presented in the following subsections. 900 Figure 1: Architecture of NL2KR The NL2KR platform provides a GUI (Figure 2) with six features: λ-application, Inverse-λ, Generalization, CCG-Parser, NL2KR-L and NL2KRT. The fourth feature is a stand-alone CCG parser and the first four features can help on user with constructing the initial lexicon. The user can then use NL2KR-L to update the lexicon using training data and the NL2KR-T button then works as a translation system. 2.1 Inverse-λ Inverse-λ plays a key role in the learning process. Formally, given two λ-expressions H and G with H = F@G or H = G@F, the Inverse-λ operation computes the λ expression F. For example, given the meaning of “is texas” as λx2.x2@stateid(texas) and the meaning of “texas” as stateid(texas), with the additional information that “is” acts as the function while “texas” is the argument, the Inverse-λ algorithm computes the meaning of “is” as λx3.λx2.x2@x3 (Figure 4). NL2KR implements the Inverse-λ algorithm specified in (Baral et al., 2012). The Inverse-λ module is separately accessible through the main GUI (Figure 2). 2.2 Generalization Generalization (Baral et al., 2012; Baral et al., 2011) is used when Inverse-λ is not sufficient to learn new semantic representation of words. In contrast to Inverse-λ which learns the exact meaning of a word in a particular context, Generalization learns the meanings of a word from similar words with existing representations. Thus, Generalization helps NL2KR to learn meanings of words that are not even present in the training data set. In the current implementation, two words are considered as similar if they have the exact same CCG category. As an example, if we want to generalize the meaning of the word “plays” with CCG category (S\NP)/NP) and the lexicon already contains an entry for “eats” with the same CCG category, and the meaning λy.λx.eats(x, y), the algorithm will extract the template λy.λx.WORD(x, y) and apply the template to plays to get the meaning λy.λx.plays(x, y). 
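As a rough illustration of the data the two sub-parts exchange, a lexicon of the kind described above can be held as a mapping from a word and its CCG category to weighted λ-expression meanings. The representation below is our own simplification for exposition, not NL2KR's internal format.

```python
from collections import defaultdict

# lexicon[(word, ccg_category)] -> list of (meaning, weight) pairs.
lexicon = defaultdict(list)

def add_meaning(word, category, meaning, weight=1.0):
    lexicon[(word, category)].append((meaning, weight))

# Entries of the kind used in the GeoQuery examples later in the paper.
add_meaning("texas", "NP", "stateid(texas)")
add_meaning("big", "N/N", "λx.size(x)")
```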
2.3 Combinatory Categorial Grammar Derivation of a sentence in Combinatory Categorial Grammar (CCG) determines the way the constituents combine together to establish the meaning of the whole. CCG is a type of phrase structure grammar and clearly describes the predicateargument structure of constituents. Figure 3 shows an example output of NL2KR’s CCG parser. In the figure, “John” and “home” have the category [N] (means noun) and can change to [NP] (means noun phrase). The phrase“walk home” has the category [S\NP], which means that it can combine with a constituent with category [NP] (“John” in this case) from left with the backward application to form category [S] (sentence). The word “walk” has the category [(S\NP)/NP], which means it can combine with a constituent with category [NP] (“home”) from right through the forward application combinator to form category [S\NP] (of “walk home”). A detailed description on CCG goes beyond the scope of this paper (see (Steedman, 2000) for more details). Since, natural language sentences can have various CCG parse trees, each expressing a different meaning of the sentence, a key challenge 901 Figure 2: NL2KR’s main GUI, Version 1.7.0001 Figure 3: CCG parse tree of ”John walked home”. in the learning and the translation process is to find a suitable CCG parse tree for a sentence in natural language. We overcome this impediment by allowing our learning and translation subsystem to work with multiple weighted parse trees for a given sentence and determining on the fly, the one that is most suitable. We discuss more on this in sections 2.4-2.6. Existing CCG parsers (Curran et al., 2007; Lierler and Sch¨uller, 2012) either return a single best parse tree for a given sentence or parse it in all possible ways with no preferential ordering among them. In order to overcome this shortcoming and generate more than one weighted candidate parse trees, we have developed a new parser using beam search with Cocke-Younger-Kasami(CYK) algorithm. NL2KRs CCG parser uses the C&C model (Curran et al., 2007) and constraints from the Stanford parser (Socher et al., 2013; Toutanova et al., 2003) to guide the derivation of a sentence. The output of the CCG parser is a set of k weighted parse trees, where the parameter k is provided by the user. NL2KR system allows one to use the CCG parser independently through the interactive GUI. The output graphs look like the one in Figure 3. It can be zoomed in/out and its nodes can be moved around, making it easier to work with complex sentences. 2.4 Multistage learning approach Learning meanings of words is the major component of our system. The inputs to the learning module are a list of training sentences, their target formal representations and an initial lexicon consisting of triplets of the form <word, CCG category, meaning>, where meanings are represented in terms of λ-calculus expressions. The output of the algorithm is a final dictionary containing a set of 4-tuples (word, CCG category, meaning, weight). Interactive Multistage Learning Algorithm (IMLA) NL2KR employs an Interactive Multistage Learning Algorithm (Algorithm 1) that runs many iterations on the input sentences. In each iteration, it goes through one or more of the following stages: Stage 1 In Stage 1, it gets all the unfinished sentences. It then employs Bottom Up-Top Down algorithm (Algorithm 2) to learn new meanings (by Inverse-λ). 
For a sentence, if it has computed the meanings of all its constituents, which can be combined to produce the given representation, that sentence is considered as learned. Each 902 Algorithm 1 IMLA algorithm 1: function IMLA(initLexicon,sentences, sentsMeanings) 2: regWords ←∅ 3: generalize ←false 4: lexicon ←initLexicon 5: repeat 6: repeat 7: repeat 8: for all s ∈sentences do 9: newMeanings ← BT(s,lexicon,sentsMeanings) 10: lexicon ←lexicon ∪newMeanings 11: for all n ∈newMeanings do 12: ms ←GENERALIZE(regWords, n) 13: lexicon ←lexicon ∪ms 14: end for 15: end for 16: until newMeanings = ∅ 17: if generalize=false then 18: generalize ←true 19: for all t ∈unfinishedSents do 20: words ←GETALLWORDS(t) 21: ms ←GENERALIZE(words) 22: lexicon ←lexicon ∪ms 23: regWords ←regWords ∪words 24: end for 25: end if 26: until newMeanings = ∅ 27: INTERATIVELEARNING 28: until unfinishedSents = ∅OR userBreak 29: lexicon ← PARAMETERESTIMATION(lexicon,sentences) 30: return lexicon 31: end function new meaning learned by this process is used to generalize the words in a waiting list. Initially, this waiting list is empty and is updated in stage 2. When no more new meaning can be learned by Bottom Up-Top Down algorithm, IMLA (Algorithm 1) enters stage 2. Stage 2 In this stage, it takes all the sentences for which the learning is not yet finished and applies Generalization process on all the words of those sentences. At the same time, it populates those words into the waiting list, so that from now on, Bottom Up-Top Down will try to generalize new meanings for them when it learns some new meanings. It then goes back to stage 1. Next time, after exiting stage 1, it directly goes to stage 3. Stage 3 When both aforementioned stages can not learn all the sentences, the Interactive Learning process is invoked and all the unfinished sentences are shown on the interactive GUI (Figure 4). Users can either skip or provide more information on the GUI and the learning process is continued. After finishing all stages, IMLA (Algorithm 1) calls Parameter Estimation (section 2.5) algorithm to compute the weight of each lexicon tuple. Bottom Up-Top Down learning For a given sentence, the CCG parser is used for the CCG parse trees like the one of how big is texas in Figure 4. For each parse tree, two main processes are called, namely “bottom up” and “top down”. In the first process, all the meanings of the words in the sentences are retrieved from the lexicon. These meanings are populated in the leaf nodes of a parse tree (see Figure 4), which are combined in a bottom-up manner to compute the meanings of phrases and full sentences. We call these meanings, the current meanings. In the “top down” process, using Inverse-λ algorithm, the given meaning of the whole sentence (called the expected meaning of the sentence) and the current meanings of the phrases, we calculate the expected meanings of each of the phrases from the root of the tree to the leaves. For example, given the expected meaning of how big is texas and the current meaning of how big, we use Inverse-λ algorithm to get the meaning (expected) of is texas. This expected meaning is used together with current meanings of is (texas) to calculate the expected meanings of texas (is). The expected meanings of the leaf nodes we have just learned will be saved to the lexicon and will be used in the other sentences and in subsequent learning iteration. The “top down” process is stopped when the expected meanings are same as the current meanings. 
And in both “bottom up” and “top-down” processes, the beam search algorithm is used to speed-up the learning process. Interactive learning In the interactive learning process it opens a GUI which shows the unfinished sentences. Users can see the current and expected meanings for the unfinished sentences. When the user gives additional meanings of word(s), the λapplication or Inverse-λ operation is automatically performed to update the new meaning(s) to related 903 Figure 4: Interactive learning GUI. The box under each node show: the corresponding phrases [CCG category], the expected meanings and the current meanings. Click on the red node will show the window to change the current meaning (CLE) Algorithm 2 BottomUp-TopDown (BT) algorithm 1: function BT( sentence, lexicon, sentsMeanings) 2: parseTrees ←CCGPARSER(sentence) 3: for all tree ∈parseTrees do 4: t ←BOTTOMUP(tree,lexicon) 5: TOPDOWN(t,sentsMeanings) 6: end for 7: end function word(s). Once satisfied, the user can switch back to the automated learning mode. Example Let us consider the question “How big is texas?” with meaning answer(size(stateid(texas))) (see Figure 4). Let us assume that the initial dictionary has the following entries: how := NP/(N/N) : λx.λy.answer(x@y), big := N/N : λx.size(x) and texas := NP : stateid(texas). The algorithm then proceeds as follows. First, the meanings of “how” and “big” are combined to compute the current meaning of “how big” := NP : λx.answer(size(x)) in the “bottom up” process. Since the meaning of “is” is unknown, the current meaning of “is texas” still remains unknown. It then starts the “top down” process where it knows the expected meaning of “How big is texas” := S : answer(size(stateid(texas))) and the current meaning of “how big”. Using them in the Inverse-λ algorithm, it then compute the meaning of “is texas” := S\NP : λx1.x1@stateid(texas). Using this expected meaning and current meaning of “texas” := NP : stateid(texas), it then calculates the expected meaning of “is” as “is” := (S\NP)/NP : λx2.λx1.x1@x2. This newly learned expected meaning is then saved into the lexicon. Since the meaning of all the words in the question are known, the learning algorithm stops here and the Interactive Learning is never called. If initially, the dictionary contains only two meanings: “big” := N/N : λx.size(x) and “texas” := NP : stateid(texas), NL2KR tries to first learn the sentence but fails to learn the complete sentence and switches to Interactive Learning which shows the interactive GUI (see Figure 4). If the user specifies that “how” means λx.λy.answer(x@y), NL2KR combines its meaning with the meaning of “big” to get the meaning “how big” := NP : λx.answer(size(x)). It will then use Inverseλ to figure out the meaning of “is texas” and then the meaning of “is”. Now all the meanings are combined to compute the current meaning answer(size(stateid(texas))) of “How big is texas”. This meaning is same as the expected 904 meaning, so we know that the sentence is successfully learned. Now, the user can press Retry Learning to switch back to automated learning. 2.5 Parameter Estimation The Parameter Estimation module estimates a weight for each word-meaning pair such that the joint probability of the training sentences getting translated to their given representation is maximized. It implements the algorithm described in Zettlemoyer et. al.(2005). 2.6 Translation The goal of this module is to convert input sentences into the target formalism using the lexicon previously learned. 
The algorithm used in Translation module (Algorithm 3) is similar to the bottom-up process in the learning algorithm. It first obtains several weighted CCG parse trees of the input sentence. It then computes a formal representation for each of the parse trees using the learned dictionary. Finally, it ranks the translations according to the weights of word-meaning pairs and the weights of the CCG parse trees. However, test sentences may contain words which were not present in the training set. In such cases, Generalization is used to guess the meanings of those unknown words from the meanings of the similar words present in the dictionary. Algorithm 3 Translation algorithm 1: function TRANSLATE(sentence, lexicon) 2: candidates ←∅ 3: parseTrees ←CCGPARSER(sentence) 4: for all tree ∈parseTrees do 5: GENERALIZE(tree); 6: t ←BOTTOMUP(tree) 7: candidates ←candidates ∪t 8: end for 9: output ←VERIFY-RANK(candidates) 10: return output 11: end function 3 Experimental Evaluation We have evaluated NL2KR on two standard corpora: GeoQuery and Jobs. For both the corpus, the output generated by the learned system has been considered correct if it is an exact replica of the logical formula described in the corpus. We report the performance in terms of precision (percentage of returned logical-forms that are correct), recall (percentage of sentences for which the correct logical-form was returned), F1-measure (harmonic mean of precision and recall) and the size of the initial dictionary. We compare the performance of our system with recently published, directly-comparable works, namely, FUBL (Kwiatkowski et al., 2011), UBL (Kwiatkowski et al., 2010), λ-WASP (Wong and Mooney, 2007), ZC07 (Zettlemoyer and Collins, 2007) and ZC05 (Zettlemoyer and Collins, 2005) systems. 3.1 Corpora GeoQuery GeoQuery (Tang and Mooney, 2001) is a corpus containing questions on geographical facts about the United States. It contains a total of 880 sentences written in natural language, paired with their meanings in a formal query language, which can be executed against a database of the geographical information of the United States. We follow the standard training/testing split of 600/280. An example sentence meaning pair is shown below. Sentence: How long is the Colorado river? Meaning: answer(A,(len(B,A),const(B, riverid(colorado)), river(B))) Jobs The Jobs (Tang and Mooney, 2001) dataset contains a total of 640 job related queries written in natural language. The Prolog programming language has been used to represent the meaning of a query. Each query specifies a list of job criteria and can be directly executed against a database of job listings. An example sentence meaning pair from the corpus is shown below. Question: What jobs are there for programmers that know assembly? Meaning: answer(J,(job(J),title(J,T), const(T,’Programmer’),language(J,L), const(L,’assembly’)))) The dataset contains a training split of 500 sentences and a test split of 140 sentences. 
3.2 Initial Dictionary Formulation GeoQuery For GeoQuery corpus, we manually selected a set of 100 structurally different sentences from the training set and initiated the learning process with a dictionary containing the repre905 GUI Driven Initial Dictionary Learned Dictionary ♯<word, category > 31 118 401 ♯<word, category, meaning> 36 127 1572 ♯meaning 30 89 819 Table 1: Comparison of Initial and Learned dictionary for GeoQuery corpus on the basis of the number of entries in the dictionary, number of unique <word, CCG category> pairs and the number of unique meanings across all the entries. “GUI Driven” denotes the amount of the total meanings given through interactive GUI and is a subset of the Initial dictionary. GUI Driven Initial Dictionary Learned Dictionary ♯<word, category> 58 103 226 ♯<word, category, meaning> 74 119 1793 ♯meaning 57 71 940 Table 2: Comparison of Initial and Learned dictionary for Jobs corpus. sentation of the nouns and question words. These meanings were easy to obtain as they follow simple patterns. We then trained the translation system on those selected sentences. The output of this process was used as the initial dictionary for training step. Further meanings were provided on demand through interactive learning. A total of 119 word meanings tuples (Table 1, ♯<word, category, meaning >) were provided from which the NL2KR system learned 1793 tuples. 45 of the 119 were representation of nouns and question words that were obtained using simple patterns. The remaining 74 were obtained by a human using the NL2KR GUI. These numbers illustrate the usefulness of the NL2KR GUI as well as the NL2KR learning component. One of our future goals is to further automate the process and reduce the GUI interaction part. Table 1 compares the initial and learned dictionary for GeoQuery on the basis of number of unique <word, category, meaning> entries in dictionary, number of unique <word, category> pairs and the number of unique meanings across all the entries in the dictionary. Since each unique <word, CCG category> pair must have at least one meaning, the total number of unique <word, category> pairs in the training corpus provides a lower bound on the size of the ideal output dictionary. However, one <word, category> pair may have multiple meanings, so the ideal dictionary can be much bigger than the number of unique <word, category> pairs. Indeed, there were many words such as “of”, “in” that had multiple meanings for the same CCG category. Table 1 clearly describes that the amount of initial effort is substantially less compared to the return. Jobs For the Jobs dataset, we followed a similar process as in the GeoQuery dataset. A set of 120 structurally different sentences were selected and a dictionary was created which contained the representation of the nouns and the question words from the training corpus. A total of 127 word meanings were provided in the process. Table 2 compares the initial and learned dictionary for Jobs. Again, we can see that the amount of initial effort is substantially less in comparison to the return. 3.3 Precision, Recall and F1-measure Figure 5: Comparison of Precision, Recall and F1-measure on GeoQuery and Jobs dataset. Table 3, Table 4 and Figure 5 present the comparison of the performance of NL2KR on the GeoQuery and Jobs domain with other recent works. 
NL2KR obtained 91.1% precision value, 92.1% 906 System Precision Recall F1 ZC05 0.963 0.793 0.87 ZC07 0.916 0.861 0.888 λ-WASP 0.9195 0.8659 0.8919 UBL 0.885 0.879 0.882 FUBL 0.886 0.886 0.886 NL2KR 0.911 0.921 0.916 Table 3: Comparison of Precision, Recall and F1-measure on GeoQuery dataset. recall value and a F1-measure of 91.6% on GeoQuery (Figure 5, Geo880) dataset. For Jobs corpus, the precision, recall and F1-measure were 95.43%, 94.03% and 94.72% respectively. In all cases, NL2KR achieved state-of-the-art recall and F1 measures and it significantly outperformed FUBL (the latest work on translation systems) on GeoQuery. For both GeoQuery and Jobs corpus, our recall is significantly higher than existing systems because meanings discovered by NL2KRs learning algorithm is more general and reusable. In other words, meanings learned from a particular sentence are highly likely to be applied again in the context of other sentences. It may be noted that, larger lexicons do not necessarily imply higher recall as lambda expressions for two phrases may not be suitable for functional application, thus failing to generate any translation for the whole. Moreover, the use of a CCG parser maximizes the recall by exhibiting consistency and providing a set of weighted parse trees. By consistency, we mean that the order of the weighted parse tree remains same over multiple parses of the same sentence and the sentences having similar syntactic structures have identical ordering of the derivations, thus making Generalization to be more effective in the process of translation. The sentences for which NL2KR did not have a translation are the ones having structural difference with the sentences present in the training dataset. More precisely, their structure was not identical with any of the sentences present in the training dataset or could not be constructed by combining the structures observed in the training sentences. We analyzed the sentences for which the translated meaning did not match the correct one and observed that the translation algorithm selected the wrong meaning, even though it discovered the correct one as one of the possible meanings the System Precision Recall F1 ZC05 0.9736 0.7929 0.8740 COCKTAIL 0.9325 0.7984 0.8603 NL2KR 0.9543 0.9403 0.9472 Table 4: Comparison of Precision, Recall and F1-measure on Jobs dataset. sentence could have had in the target formal language. Among the sentences for which NL2KR returned a translation, there were very few instances where it did not discover the correct meaning in the set of possible meanings. It may be noted that even though our precision is lower than ZC05 and very close to ZC07 and WASP; we have achieved significantly higher F1 measure than all the related systems. In fact, ZC05, which achieves the best precision for both the datasets, is better by a margin of only 0.019 on the Jobs dataset and 0.052 on the GeoQuery dataset. We think one of the main reasons is that it uses manually predefined lambdatemplates, which we try to automate as much as possible. 4 Conclusion NL2KR is a freely available2, user friendly, rich graphical platform for building translation systems to convert sentences from natural language to their equivalent formal representations in a wide variety of domains. We have described the system algorithms and architecture and its performance on the GeoQuery and Jobs datasets. 
As mentioned earlier, the NL2KR GUI and the NL2KR learning module help in starting from a small initial lexicon (for example, 119 in Table 2) and learning a much larger lexicon (1793 in Table 2). One of our future goals is to reduce the initial lexicon to be even smaller by further automating the NL2KR GUI interaction component . Acknowledgements We thank NSF for the DataNet Federation Consortium grant OCI-0940841 and ONR for their grant N00014-13-1-0334 for partially supporting this research. 2More examples and a tutorial to use NL2KR are available in the download package. 907 References Yoav Artzi and Luke Zettlemoyer. 2013. UW SPF: The University of Washington Semantic Parsing Framework. arXiv preprint arXiv:1311.3011. Chitta Baral, Juraj Dzifcak, Marcos Alvarez Gonzalez, and Jiayu Zhou. 2011. Using inverse λ and generalization to translate english to formal languages. In Proceedings of the Ninth International Conference on Computational Semantics, pages 35–44. Association for Computational Linguistics. Chitta Baral, Juraj Dzifcak, Marcos Alvarez Gonzalez, and Aaron Gottesman. 2012. Typed answer set programming lambda calculus theories and correctness of inverse lambda algorithms with respect to them. TPLP, 12(4-5):775–791. Alonzo Church. 1936. An Unsolvable Problem of Elementary Number Theory. American Journal of Mathematics, 58(2):345–363, April. James Curran, Stephen Clark, and Johan Bos. 2007. Linguistically Motivated Large-Scale NLP with C&C and Boxer. In Proceedings of the 45th Annual Meeting of the Association for Computational Linguistics Companion Volume Proceedings of the Demo and Poster Sessions, pages 33–36, Prague, Czech Republic, June. Association for Computational Linguistics. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higherorder unification. In Proceedings of the 2010 conference on empirical methods in natural language processing, pages 1223–1233. Association for Computational Linguistics. Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2011. Lexical generalization in ccg grammar induction for semantic parsing. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1512–1523. Association for Computational Linguistics. Yuliya Lierler and Peter Sch¨uller. 2012. Parsing combinatory categorial grammar via planning in answer set programming. In Correct Reasoning, pages 436– 453. Springer. Richard Montague. 1974. English as a Formal Language. In Richmond H. Thomason, editor, Formal Philosophy: Selected Papers of Richard Montague, pages 188–222. Yale University Press, New Haven, London. Richard Socher, John Bauer, Christopher D. Manning, and Andrew Y. Ng. 2013. Parsing with Compositional Vector Grammars. In ACL (1), pages 455– 465. Mark Steedman. 2000. The syntactic process, volume 35. MIT Press. Lappoon R Tang and Raymond J Mooney. 2001. Using multiple clause constructors in inductive logic programming for semantic parsing. In Machine Learning: ECML 2001, pages 466–477. Springer. Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-ofspeech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1. Yuk Wah Wong and Raymond J Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. 
In Annual MeetingAssociation for computational Linguistics, volume 45, page 960. Citeseer. Luke S. Zettlemoyer and Michael Collins. 2005. Learning to Map Sentences to Logical Form: Structured Classification with Probabilistic Categorial Grammars. In UAI, pages 658–666. AUAI Press. Luke S Zettlemoyer and Michael Collins. 2007. Online learning of relaxed CCG grammars for parsing to logical form. In In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL-2007). 908
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 909–919, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Multiple Many-to-Many Sequence Alignment for Combining String-Valued Variables: A G2P Experiment Steffen Eger Text Technology Lab Goethe University Frankfurt am Main Frankfurt am Main, Germany [email protected] Abstract We investigate multiple many-to-many alignments as a primary step in integrating supplemental information strings in string transduction. Besides outlining DP based solutions to the multiple alignment problem, we detail an approximation of the problem in terms of multiple sequence segmentations satisfying a coupling constraint. We apply our approach to boosting baseline G2P systems using homogeneous as well as heterogeneous sources of supplemental information. 1 Introduction String-to-string translation (string transduction) is the problem of converting one string x over an alphabet Σ into another string y over a possibly different alphabet Γ. The most prominent applications of string-to-string translation in natural language processing (NLP) are graphemeto-phoneme conversion, in which x is a letterstring and y is a string of phonemes, transliteration (Sherif and Kondrak, 2007), lemmatization (Dreyer et al., 2008), and spelling error correction (Brill and Moore, 2000). The classical learning paradigm in each of these settings is to train a model on pairs of strings {(x, y)} and then to evaluate model performance on test data. Thereby, all state-of-the-art modelings we are aware of (e.g., (Jiampojamarn et al., 2007; Bisani and Ney, 2008; Jiampojamarn et al., 2008; Jiampojamarn et al., 2010; Novak et al., 2012)) proceed by first aligning the string pairs (x, y) in the training data. Also, these modelings acknowledge that alignments may typically be of a rather complex nature in which several x sequence ph oe n i x f i n I ks Table 1: Sample monotone many-to-many alignment between x = phoenix and y = finIks. characters may be matched up with several y sequence characters; Table 1 illustrates. Once the training data is aligned, since x and y sequences are then segmented into equal number of segments, string-to-string translation may be seen as a sequence labeling (tagging) problem in which x (sub-)sequence characters are observed variables and y (sub-)sequence characters are hidden states (Jiampojamarn et al., 2007; Jiampojamarn et al., 2010). In this work, we extend the problem of classical string-to-string translation by assuming that, at training time, we have available (M + 2)-tuples of strings {(x, ˆy(1), . . . , ˆy(M), y)}, where x is the input string, ˆy(m), for 1 ≤m ≤M, are supplemental information strings, and y is the desired output string; at test time, we wish to predict y from (x, ˆy(1), . . . , ˆy(M)). Generally, we may think of ˆy(1), . . . , ˆy(M) as arbitrary strings over arbitrary alphabets Σ(m), for 1 ≤m ≤M. For example, x might be a letter-string and ˆy(m) might be a transliteration of x in language Lm (cf. Bhargava and Kondrak (2012)). Alternatively, and this is our model scenario in the current work, x might be a letter input string and ˆy(m) might be the predicted string of phonemes, given x, produced by an (offline) system Tm. This situation is outlined in Table 3. In the table, we also illustrate a multiple (monotone) many-to-many alignment of (x, ˆy(1), . . . , ˆy(M), y). 
By this, we mean an alignment where (1) subsequences of all M +2 strings may be matched up with each other (many909 to-many alignments), and where (2) the matching up of subsequences obeys monotonicity. Note that such a multiple alignment generalizes classical monotone many-to-many alignments between pairs of strings, as shown in Table 1. Furthermore, such an alignment may apparently be quite useful. For instance, while none of the strings ˆy(m) in the table equals the true phonetic transcription y of x, taking a position-wise majority vote of the multiple alignment of (ˆy(1), . . . , ˆy(M)) yields y. Moreover, analogously as in the case of pairs of aligned strings, we may perceive the so extended stringto-string translation problem as a sequence labeling task once (x, ˆy(1), . . . , ˆy(M), y) are multiply aligned, but now, with additional observed variables (or features), namely, (sub-)sequence characters of each string ˆy(m). To further motivate our approach, consider the situation of training a new G2P system on the basis of, e.g., Combilex (Richmond et al., 2009). For each letter form in its database, Combilex provides a corresponding phonetic transcription. Now, suppose that, in addition, we can poll an external knowledge source such as Wiktionary for (its) phonetic transcriptions of the respective Combilex letter words as outlined in Table 2. The cenInput form Wiktionary Combilex neutrino nju:tôi:noU nutrinF wooded wUdId wUd@d wrench ôEnúS rEn< Table 2: Input letter words, Wiktionary and Combilex transcriptions. tral question we want to answer is: can we train a system using this additional information which performs better than the ‘baseline’ system that ignores the extra information? Clearly, a system with more information should not perform worse than a system with less information (unless the additional information is highly noisy), but it is a priori not clear at all how the extra information can be included, as Bhargava and Kondrak (2012) note: output predictions may be in distinct alphabets and/or follow different conventions, and simple rule-based conversions may even deteriorate a baseline system’s performance. Their solution to the problem is to let the baseline system output its n-best phonetic transcriptions, and then to re-rank these n-best predictions via an SVM reranker trained on the supplemental representations x = schizo s ch i z o ˆy(1) = skaIz@U s k aI z @U ˆy(2) = saIz@U s aI z @U ˆy(3) = skIts@ s k I ts @ ˆy(4) = Sits@U S i ts @U ˆy(5) = skIts@ s k I ts @ y = skIts@U s k I ts @U Table 3: Left: Input string x, predictions of 5 systems, and output string y. Right: A multiple many-to-many alignment of (x, ˆy(1), . . . , ˆy(5), y). Skips are marked by a dash (‘-’). (see their figure 2). Our approach is much different from this: we character (or substring) align the supplemental information strings with the input letter strings and then sequentially transduce input character substrings as in the standard G2P approach, but where the sequential transducer is aware of the corresponding subsequences of the supplemental information strings. Our goals in the current work are first, in Section 2, to formally introduce the multiple manyto-many alignment problem, which, to our knowledge, has not yet been formally considered, and to indicate how it can be solved (by standard extensions of well-known DP recursions). Secondly, we outline an ‘approximation algorithm’, also in Section 2, with much better runtime complexity, to solving the multiple many-to-many alignment problem. 
This proceeds by optimally segmenting individual strings to align under the global constraint that the number of segments must agree across strings. Thirdly, we demonstrate experimentally, in Section 5, that multiple many-tomany alignments may be an extremely useful first step in boosting the performance of a G2P model. In particular, we show that by conjoining a base system with additional systems very high performance increases can be achieved. We also investigate the effects of using our introduced approximation algorithm instead of ‘exactly’ determining alignments. We discuss related work in Section 3, present data and systems in Section 4 and conclude in Section 6. 2 Mult. Many-to-Many Alignm. Models We now formally define the problem of multiply aligning several strings in a monotone and manyto-many alignment manner. For notational convenience, in this section, let the N strings to align be 910 denoted by w1, . . . , wN (rather than x, ˆy(m), y, etc.). Let each wn, for 1 ≤n ≤N, be an arbitrary string over some alphabet Σ(n). Let ℓn = |wn| denote the length of wn. Moreover, assume that a set S ⊆QN n=1{0, . . . , ℓn}\{0N} of allowable steps is specified, where 0N = (0, . . . , 0 | {z } N times ).1 We interpret the elements of S as follows: if (s1, s2, . . . , sN) ∈ S, then subsequences of w1 of length s1, subsequences of w2 of length s2, . . ., subsequences of wN of length sN may be matched up with each other. In other words, S defines the types of valid ‘many-to-many match-up operations’.2 While we could drop S from consideration and simply allow every possible matching up of character subsequences, it is convenient to introduce S because algorithmic complexity may then be specified in terms of S, and by choosing particular S, one may retrieve special cases otherwise considered in the literature (see next section). As indicated, for us, a multiple alignment of (w1, . . . , wN) is any scheme w1,1 w1,2 · · · w1,k w2,1 w2,2 · · · w2,k ... ... ... ... wN,1 wN,2 · · · wN,k such that (|w1,i| , . . . , |wN,i|) ∈S, for all i = 1, . . . , k, and such that wn = wn,1 · · · wn,k, for all 1 ≤n ≤N. Let AS = AS(w1, . . . , wN) denote the set of all multiple alignments of (w1, . . . , wN). For an alignment a ∈AS, denote by score(a) = f(a) the score of alignment a under alignment model f, where f : AS(w1, . . . , wN) →R. We now investigate solutions to the problem of finding the alignment with maximal score under different choices of alignment models f, i.e., we search to efficiently solve max a∈AS(w1,...,wN) f(a). (1) Unigram alignment model For our first alignment model f, we assume that f(a), for a ∈AS, is the score f(a) = k X i=1 sim1(w1,i, . . . , wN,i) (2) 1Here, Q denotes the Cartesian product of sets. 2In the case of two strings, this is sometimes denoted in the manner M-N (e.g., 3-2, 1-0), indicating that M characters of one string may be matched up with N characters of the other string. Analogously, we could write here s1-s2-s3-· · · . for a real-valued similarity function sim1 : QN n=1 Σ(n)∗→R. We call the model f in (2) a unigram model because f(a) is the sum of the similarity scores of the matched-up subsequences (w1,i, . . . , wN,i), ignoring context. Due to this independence assumption, solving maximization problem in Eq. (1) under specification (2) is straightforward via a dynamic programming (DP) recursion. To do so, define by MS,sim1(i1, i2, . . . , iN) the score of the best alignment, under alignment model f = P sim1 and set of steps S, of (w1(1 : i1), . . . 
, wN(1 : iN)).3 Then, MS,sim1(i1, . . . , iN) is equal to max (j1,...,jN )∈S MS,sim1(i1 −j1, . . . , iN −jN) + sim1 w(i1 −j1 + 1 : i1), . . . , w(iN −jN + 1 : jN)  . (3) This recurrence directly leads to a DP algorithm, shown in Algorithm 1, for computing the score of the best alignment of (w1, . . . , wN); the actual alignment can be found by storing pointers to the maximizing steps taken. If similarity evaluations sim1(w1,i, . . . , wN,i) are thought of as taking constant time, this algorithm’s run time is O(QN n=1 ℓn · |S|). When ℓ= ℓ1 = · · · = ℓn and |S| = ℓN −1 (‘worst case’ size of S), then the algorithm’s runtime is thus O(ℓ2N), which quickly becomes untractable as N, the number of strings to align, increases. Of course, the unigram alignment model could be generalized to an m-gram alignment model. An m-gram alignment model would exhibit worstcase runtime complexity of O(ℓ(m+1)N) under analogous DP recursions as for the unigram model. Algorithm 1 1: procedure UNIGRAM-ALIGN(w1, . . . , wN; S, sim1) 2: M(i1, . . . , iN) ← −∞ for all (i1, . . . , iN) ∈ZN 3: M(0N) ←0 4: for i1 = 0 . . . ℓ1 do 5: for · · · do 6: for iN = 0 . . . ℓN do 7: if (i1, . . . , iN) ̸= 0N then 8: M(i1, . . . , iN) ←Eq. (3) 9: return M(ℓ1, . . . , ℓN) Separable alignment models For our second model class, assume that, for any a ∈ 3We denote by x(a : b) the substring xaxa+1 · · · xb of the string x1x2 · · · xt. 911 AS(w1, . . . , wN), f(a) decomposes into f(a) = Ψ  fw1(w1,1 · · · w1,k), . . . , fwN (wN,1 · · · wN,k)  (4) for some models fw1, . . . , fwN and where Ψ : RN →R is non-decreasing in its arguments (e.g., Ψ(fw1, . . . , fwN ) = PN n=1 fwn). If f(a) decomposes in such a manner, then f(a) is called separable.4 The advantage with separable models is that we can solve the ‘subproblems’ fw1, . . . , fwN independently. Thus, in order to find optimal multiple alignments of (w1, . . . , wN) under such a specification, we would only have to find the best segmentations of sequences wn under models fwn, for 1 ≤n ≤N, subject to the constraint that the segmentations must agree in their number of segments (the coupling variable). Let Swn ⊆ {0, 1, . . . , ℓn} denote the constraints on segment lengths, similar to the interpretation of steps in S. If fwn is a unigram segmentation model then the problem of finding the best segmentation of wn with exactly j segments can be solved in time O(ℓn |Swn| j). Thus, if each fwn is a unigram segmentation model, worst-case time complexity for each subproblem would be O(ℓ3 n) (if string wn can be segmented into at most ℓn segments) and then the overall problem (1) under specification (4) is solvable in worst-case time N · O(ℓ3). More generally, if each fwn is an m-gram segmentation model, then worst-case time complexity amounts to N · O(ℓm+2). Importantly, this scales linearly with the number N of strings to align, rather than exponentially as the O(ℓ(m+1)N) under the (non-separable) m-gram alignment model discussed above. Unsupervised alignments The algorithms presented may be applied iteratively in order to induce multiple alignments in an unsupervised (EMlike) fashion in which sim1 is gradually learnt (e.g., starting from a uniform initialization of sim1). We skip details of this, as we do not make us of it in our current experiments. Rather, in our experiments below, we directly specify sim1 as a sum of pairwise similarity scores which we extract from alignments produced by an off-the-shelf pairwise aligner. 4Note the difference between Eqs. (2) and (4). 
While each fwn in (4) operates on a ‘row’ of an alignment scheme, sim1 in (2) acts on the ‘columns’. In other words, the unigram alignment model correlates the multiply matched-up subsequences, while the separable alignment model assumes independence here. 3 Related work Monotone alignments have a long tradition, both in NLP and bioinformatics. The classical Needleman-Wunsch algorithm (Needleman and Wunsch, 1970) computes the optimal alignment between two sequences when only single character matches, mismatches, and skips are allowed. It is a special case of the unigram model (2) in optimization problem (1) for which N = 2, S = {(1, 0), (0, 1), (1, 1)} and sim1 takes on values from {0, −1}, depending on whether compared input subsequences match or not. As is well-known, this alignment specification is equivalent to the edit distance problem (Levenshtein, 1966) in which the minimal number of insertions, deletions and substitutions is sought that transforms one string into another. Substringto-substring edit operations — or equivalently, (monotone) many-to-many alignments — have appeared in the NLP context, e.g., in (Deligne et al., 1995), (Brill and Moore, 2000), (Jiampojamarn et al., 2007), (Bisani and Ney, 2008), (Jiampojamarn et al., 2010), or, significantly earlier, in (Ukkonen, 1985), (V´eronis, 1988). Learning edit distance/monotone alignments in an unsupervised manner has been the topic of, e.g., (Ristad and Yianilos, 1998), (Cotterell et al., 2014), besides the works already mentioned. All of these approaches are special cases of our unigram model outlined in Section 2 — i.e., they consider particular S (most prominently, S = {(1, 0), (0, 1), (1, 1)}) and/or restrict attention to only N = 2 strings.5 Alignments between multiple sequences, i.e., multiple sequence alignment, has also been an issue both in NLP (e.g., Covington (1998), Bhargava and Kondrak (2009)) and bioinformatics (e.g., Durbin et al. (1998)). An interesting application of alignments of multiple sequences is to determine what has been called median string (Kohonen, 1985) or Steiner consensus string (Gusfield, 1997), defined as the string ¯s that minimizes the sum of distances, for a given distance function d(x, y), to a list of strings s1, . . . , sN (Jiang et al., 2012); typically, d is the standard edit distance. As Gusfield (1997) shows, the Steiner consensus string may be retrieved from a multiple align5In Cotterell et al. (2014), context influences alignments, so that the approach goes beyond the unigram model sketched in (2), but there, too, the focus is on the situation N = 2 and S = {(1, 0), (0, 1), (1, 1)}. 912 ment of s1, . . . , sN by concatenating the columnwise majority characters in the alignment, ignoring skips. Since median string computation (and hence also the multiple many-to-many alignment problem, as we consider) is an NP-hard problem (Sim and Park, 2003), designing approximations is an active field of research. For example, Marti and Bunke (2001) ignore part of the search space by declaring matches-up of distant characters as unlikely, and Jiang et al. (2012) apply an approximation based on string embeddings in vector spaces. Paul and Eisner (2012) apply dual decomposition to compute Steiner consensus strings. Via the approach taken in this paper, median strings may be computed in case d is a (distance) function taking substring-to-substring edit operations into account, a seemingly straightforward, yet extremely useful generalization in several NLP applications, as indicated in the introduction. 
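To make the column-wise majority construction concrete, the short Python sketch below recovers a consensus string from an already-computed multiple alignment. The row representation, the function name, and the reconstructed skip positions for the Table 3 example are illustrative assumptions, not part of any system cited here.

from collections import Counter

def consensus_from_alignment(rows, skip="-"):
    """Column-wise majority vote over a multiple many-to-many alignment.
    `rows` is a list of aligned sequences, each given as a list of k aligned
    substrings (one per alignment column); `skip` marks empty slots and is
    ignored in the vote, as in the Steiner consensus construction above."""
    k = len(rows[0])
    assert all(len(r) == k for r in rows), "all rows must have the same number of columns"
    consensus = []
    for col in range(k):
        votes = Counter(r[col] for r in rows if r[col] != skip)
        if votes:  # a column consisting only of skips contributes nothing
            consensus.append(votes.most_common(1)[0][0])
    return "".join(consensus)

# The five aligned system outputs of Table 3 (input row x and gold row y omitted);
# skip positions are reconstructed here, and ties between equally frequent
# substrings (e.g., 'aI' vs 'I' in the third column) are broken arbitrarily.
predictions = [
    ["s", "k", "aI", "z",  "@U"],
    ["s", "-", "aI", "z",  "@U"],
    ["s", "k", "I",  "ts", "@"],
    ["S", "-", "i",  "ts", "@U"],
    ["s", "k", "I",  "ts", "@"],
]
print(consensus_from_alignment(predictions))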
Our approach may also be seen in the context of classifier combination for string-valued variables. While ensemble methods for structured prediction have been considered in several works (see, e.g., Nguyen and Guo (2007), Cortes et al. (2014), and references therein), a typical assumption in this situation is that the sequences to be combined have equal length, which clearly cannot be expected to hold when, e.g., the outputs of several G2P, transliteration, etc., systems must be combined. In fact, the multiple many-to-many alignment models investigated in this work could act as a preprocessing step in this setup, since the alignment precisely serves the functionality of segmenting the strings into equal number of segments/substructures. Of course, combining outputs with varying number of elements is also an issue in machine translation (e.g., Macherey and Och (2007), Heafield et al. (2009)), but, there, the problem is harder due to the potential non-monotonicities in the ordering of elements, which typically necessitates (additional) heuristics. One approach for constructing multiple alignments is here progressive multiple alignment (Feng and Doolittle, 1987) in which a multiple (typically one-to-one) alignment is iteratively constructed from successive pairwise alignments (Bangalore et al., 2001). Matusov et al. (2006) apply word reordering and subsequent pairwise monotone one-to-one alignments for MT system combination. 4 Data and systems 4.1 Data We conduct experiments on the General American (GA) variant of the Combilex data set (Richmond et al., 2009). This contains about 144,000 grapheme-phoneme pairs as exemplarily illustrated in Table 2. In our experiments, we split the data into two disjoint parts, one for testing (about 28,000 word pairs) and one for training/development (the remainder). 4.2 Systems BASELINE Our baseline system is a linear-chain conditional random field model (CRF)6 (Lafferty et al., 2001) which we apply in the manner indicated in the introduction: after many-to-many aligning the training data as in Table 1, at training time, we use the CRF as a tagging model that is trained to label each input character subsequence with an output character subsequence. As features for the CRF, we use all n-grams of subsequences of x that fit inside a window of size 5 centered around the current subsequence (context features). We also include linear-chain features which allow previously generated output character subsequences to influence current output character subsequences. In essence, our baseline model is a standard discriminative approach to G2P. It is, all in all, the same approach as described in Jiampojamarn et al. (2010), except that we do not include joint n-gram features. At test time, we first segment a new input string x and then apply the CRF. Thereby, we train the segmentation module on the segmented x sequences, as available from the aligned training data.7 BASELINE+X As competitors for the baseline system, we introduce systems that rely on the predictions of one or several additional (black box/offline) systems. At training time, we first multiply many-to-many align the input string x, the predictions ˆy(1), . . . , ˆy(M) and the true transcription y as illustrated in Table 3 (see Section 4.3 for details). Then, as for the baseline system, we train a CRF to label each input character 6We made use of the CRF++ package available at https://code.google.com/p/crfpp/. 
7To be more precise on the training of the segmentation module, in an alignment as in Table 1, we consider the segmented x string — ph-oe-n-i-x — and then encode this segmentation in a binary string where 1’s indicate splits. Thus, segmentation becomes, again, a sequence labling task; see, e.g., Bartlett et al. (2008) or Eger (2013) for details. 913 subsequence with the corresponding output character subsequence. However, this time, the CRF has access to the subsequence suggestions (as the alignments indicate) produced by the offline systems. As features for the extended models, we additionally include context features for all predicted strings ˆy(m) (all n-grams in a window of size 3 centered around the current subsequence prediction). We also include a joint feature firing on the tuple of the current subsequence value of x, ˆy(1), . . . , ˆy(M). To illustrate, when BASELINE+X tags position 2 in the (split up) input string in Table 3, it sees that its value is ch, that the previous input position contains s, that the next contains i, that the next two contain (i,z), that the prediction of the first system at position 2 is k, that the first system’s next prediction is ai, and so forth. At test time, we first multiply many-to-many align x, ˆy(1), . . . , ˆy(M), and then apply the enhanced CRF. 4.3 Alignments To induce multiple monotone many-to-many alignments of input strings, offline system predictions and output strings, we proceed in one of two manners. Exact alignments Firstly, we specify sim1 in Eq. (2), as sim1(xi, ˆy(1) i , . . . , ˆy(M) i , yi) =  M X m=1 psim(xi, ˆy(m) i )  + psim(xi, yi), where psim is a pair-similarity function. The advantage with this specification is that the similarity of a tuple of subsequences is defined as the sum of pairwise similarity scores, which we can directly estimate from pairwise alignments of (x, ˆy(m)) that an off-the-shelf pairwise aligner can produce (we use the Phonetisaurus aligner for this). We set psim(u, v) as log-probability of observing the tuple (u, v) in the training data of pairwise aligned sequences. To illustrate, we define the similarity of (o,@U,@U,@,@U,@,@U) in the example in Table 3 as the pairwise similarity of (o,@U) (as inferred from pairwise alignments of x strings and system 1 transcriptions) plus the pairwise similarity of (o,@U) (as inferred from pairwise alignments of x strings and system 2 transcriptions), etc. At test time, we use the same procedure but drop the term psim(xi, yi) when inducing alignments. For our current purposes, we label the outlined modus as exact (alignment) modus. Approx. alignments Secondly, we derive the optimal multiple many-to-many alignment of the strings in question by choosing an alignment that satisfies the condition that (1) each individual string x, ˆy(1), . . . , ˆy(M), y is optimally segmented (e.g., ph-oe-n-i-x rather than pho-eni-x, f-i-n-I-ks rather than f-inIk-s) subject to the global constraint that (2) the number of segments must agree across the strings to align. This constitutes a separable alignment model as discussed in Section 2, and thus has much lower runtime complexity as the first model. Segmentation models can be directly learned from the pairwise alignments that Phonetisaurus produces by focusing on either the segmented x or y/ˆy(m) sequences; we choose to implement bigram individual segmentation models. 
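The approximate alignment modus just described can be sketched in a few lines of Python. The sketch below uses a toy unigram segment scorer as a stand-in for the bigram segmentation models learned from the Phonetisaurus alignments; the function names, the maximum segment length, and the scorer itself are illustrative assumptions, not the actual implementation.

def best_segmentations(word, seg_score, max_seg_len=3):
    """best[j][i]: best score of splitting word[:i] into exactly j segments,
    each of length <= max_seg_len (the allowed 'steps' for this string).
    Returns a dict mapping j -> (score, segmentation) for the full word."""
    n, NEG = len(word), float("-inf")
    best = [[NEG] * (n + 1) for _ in range(n + 1)]
    back = [[0] * (n + 1) for _ in range(n + 1)]
    best[0][0] = 0.0
    for j in range(1, n + 1):
        for i in range(j, n + 1):
            for l in range(1, min(max_seg_len, i) + 1):
                if best[j - 1][i - l] == NEG:
                    continue
                score = best[j - 1][i - l] + seg_score(word[i - l:i])
                if score > best[j][i]:
                    best[j][i], back[j][i] = score, l
    result = {}
    for j in range(1, n + 1):
        if best[j][n] > NEG:
            segs, i = [], n
            for jj in range(j, 0, -1):
                l = back[jj][i]
                segs.append(word[i - l:i])
                i -= l
            result[j] = (best[j][n], segs[::-1])
    return result

def coupled_segmentation(words, seg_scores, max_seg_len=3):
    """Segment every string optimally under its own model, subject to the
    coupling constraint that all strings end up with the same number of
    segments; the shared segment count maximising the summed scores wins."""
    tables = [best_segmentations(w, f, max_seg_len)
              for w, f in zip(words, seg_scores)]
    shared = set.intersection(*(set(t) for t in tables))
    assert shared, "no shared segment count; increase max_seg_len"
    k = max(shared, key=lambda j: sum(t[j][0] for t in tables))
    return [t[k][1] for t in tables]

# Toy scorer preferring two-character segments (stand-in for a learned model).
toy = lambda seg: 0.0 if len(seg) == 2 else -1.0
print(coupled_segmentation(["phoenix", "finIks"], [toy, toy]))
# e.g. [['pho', 'en', 'ix'], ['fi', 'nI', 'ks']]

A learned scorer would simply replace the toy lambda; the coupling constraint on the number of segments is what distinguishes this from segmenting each string in isolation.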
This second model type may be considered an approximation of the first, since in a good alignment, we would not only expect individually good segmentations and agreement of segment numbers but also that subsegments are likely correlations of each other, precisely as our first model type captures. Therefore, we shall call this alignment modus approximate (alignment) modus, for our present purposes. 5 Experiments We now describe two sets of experiments, a controlled experiment on the Combilex data set where we can design our offline/black box systems ourselves and where the black box systems are trained on a similar distribution as the baseline and the extended baseline systems. In particular, the black box systems operate on the same output alphabet as the extended baseline systems, which constitutes an ‘ideal’ situation. Thereafter, we investigate how our extended baseline system performs in a ‘real-world’ scenario: we train a system on Combilex that has as supplemental information corresponding Wiktionary (and PTE, as explained below) transcriptions. Throughout, we use as accuracy measures for all our systems word accuray (WACC). Word accuracy is defined as the number of correctly transcribed strings among all transcribed strings in a test sample. WACC is a strict measure that penalizes even tiny deviations from the gold-standard transcriptions, but has nowadays become standard in G2P. 914 5.1 A controlled experiment In our first set of experiments, we let our offline/black box systems be the Sequitur G2P modeling toolkit (Bisani and Ney, 2008) (S) and the Phonetisaurus modeling toolkit (Novak et al., 2012) (P). We train them on disjoint sets of 20,000 grapheme-to-phoneme Combilex string pairs each. The performance of these two systems, on the test set of size 28,000, is indicated in Table 4. Next, we train BASELINE on disPhonetisaurus Sequitur WACC 72.12 71.70 Table 4: Word-accuracy (in %) on the test data, for the two systems indicated. joint sets (disjoint from both the training sets of P and S) of size 2,000, 5,000, 10,000 and 20,000. Making BASELINE’s training sets disjoint from the training sets of the offline systems is both realistic (since a black box system would typically follow a partially distinct distribution from one’s own training set distribution) and also prevents the extended baseline systems from fully adapting to the predictions of either P or S, whose training set accuracy is an upward biased representation of their true accuracy. As baseline extensions, we consider the systems BASELINE+P (+P), and BASELINE+P+S (+P+S).8 Results are shown in Figures 1 and 2. We see that conjoining the base system with the predictions of the offline Phonetisaurus and Sequitur models substantially increases the baseline WACC, especially in the case of little training data. In fact, WACC increases here by almost 100% when the baseline system is complemented by ˆy(P) and ˆy(S). As training set size increases, differences become less and less pronounced. Eventually, we would expect them to drop to zero, since beyond some training set size, the additional features may provide no new information.9 We also note that conjoining the two systems is more valuable than conjoining only one system, and, in Figure 2, that the models which are based on exact multiple alignments outperform the models based on approximate alignments, but not 8We omit BASELINE+S since it yielded similar results as BASELINE+P. 
9In fact, in follow-up work, we find that the additional information may also confuse the base system when training set sizes are large enough. by a wide margin. 0.3 0.35 0.4 0.45 0.5 0.55 0.6 0.65 0.7 0.75 0.8 0 5T 10T 20T 30T Accuracy Training set size BASELINE +P +P+S Figure 1: WACC as a function of training set size for the system indicated. Exact align. modus. 0.67 0.68 0.69 0.7 0.71 0.72 0.73 0.74 0.75 0.76 0 5T 10T 20T 30T Accuracy Training set size +P +P+S +PAPRX +P+SAPRX Figure 2: Comparison of models based on exact and approximate alignments; WACC as a function of training set size. APRX denotes the approximation alignment model. Concerning differences in alignments between the two alignment types, exact vs. approximate, an illustrative example where the approximate model fails and the exact model does not is (‘false’ alignment based on the approximate model indicated): r ee n t e r e d r i E n t @‘ r d r i E n t @‘ r d which nicely captures the inability of the approximate model to account for correlations between the matched-up subsequences. That is, while the segmentations of the three shown sequences appear acceptable, a matching of graphemic t with 915 phonemic n, etc., seems quite unlikely. Still, it is very promising to see that these differences in alignment quality translate into very small differences in overall string-to-string translation model performance, as Figure 2 outlines. Namely, differences in WACC are typically on the level of 1% or less (always in favor of the exact alignment model). This is a very important finding, as it indicates that string-to-string translation need not be (severely) negatively impacted by switching to the approximate alignment model, a tractable alternative to the exact models, which quickly become practically infeasible as the number of strings to align increases. 5.2 Real-world experiments To test whether our approach may also succeed in a ‘real-world setting’, we use as offline/black box systems GA Wiktionary transcriptions of our input forms as well as PhotoTransEdit (PTE) transcriptions,10 a lexicon-based G2P system which offers both GA and RP (received pronunciation) transcription of English strings. We train and test on input strings for which both Combilex and PTE transcriptions are available, and for which both Combilex and Wiktionary transcriptions are available.11 Test set sizes are about 1,500 in the case of PTE and 3,500 in the case of Wiktionary. We only test here the performance of the exact alignment method, noting that, as before, approximate alignments produced slightly weaker results. Clearly, Wiktionary and PTE differ from the Combilex data. First, both Wiktionary and PTE use different numbers of phonemic symbols than Combilex, as Table 5 illustrates. Some differences Dataset |Σ| Combilex 54 WiktionaryGA 107 WiktionaryRP 116 PTEGA 44 PTERP 57 Table 5: Sizes of phonetic inventaries of different data sets. arise from the fact that, e.g., lengthening of vowels is indicated by two output letters in some data sets 10Downloadable from http://www.photransedit.com/. 11This yields a clear method of comparison. An alternative would be to provide predictions for missing transcriptions. In any case, by our task definition, all systems must provide a hypothesis for an input string. and only one in others. Also, phonemic transcription conventions differ, as becomes most strikingly evident in the case of RP vs. GA transcriptions — Table 6 illustrates. 
Finally, Wiktionary has many more phonetic symbols than the other datasets, a finding that we attribute to its crowd-sourced nature and lacking of normalization. Despite these differences in phonemic annotation standards between Combilex, Wiktionary and PTE, we observe that conjoining input strings with predicted Wiktionary or PTE transcriptions via multiple alignments leads to very good improvements in WACC over only using the input string as information source. Indeed, as shown in Table 7, for PTE, WACC increases by as much as 80% in case of small training sample (1,099 string pairs) and as much as 37% in case of medium-sized training sample (2,687 string pairs). Thus, comparing with the previous situation of homogenous systems, we also observe that the gain from including heterogeneous system is relatively weaker, as we would expect due to distinct underlying assumptions, but still impressive. Performance increases when including Wiktionary are slightly lower, most likely because it constitutes a very heterogenous source of phonetic transcriptions with user-idiosyncratic annotations (however, training set sizes are also different).12 BASEL. BASEL.+PTEGA BASEL.+PTERP 1,099 31.34 56.47 50.22 2,687 45.75 60.80 62.80 BASEL. BASEL.+WikGA BASEL.+WikRP 2,000 38.44 60.71 62.18 5,000 51.69 65.81 65.96 10,000 58.97 67.30 68.66 Table 7: Top: WACC in % for baseline CRF model and the models that integrate PTE in the GA versions and RP versions, respectively. Bottom: BASELINE and BASELINE+Wiktionary. 6 Conclusion We have generalized the task description of string transduction to include supplemental information strings. Moreover, we have suggested multiple 12To provide, for the interested reader, a comparison with Phonetisaurus and Sequitur: for the Wiktionary GA data, performance of Phonetisaurus is 41.80% (training set size 2,000), 55.70% (5,000) and 62.47% (10,000). Respective numbers for Sequitur are 40.58%, 54.84%, and 61.58%. On PTE, results are, similarly, slightly higher than our baseline, but substantially lower than the extended baseline. 916 b o t ch i ng b o t S I N b A tS I N b a rr ed b a d b A r d a s th m a t i c s æ s m æ t I k s a z 0 m a t I k s Table 6: Multiple alignments of input string, predicted PTE transcription and true (Combilex) transcription. Differences may be due to alternative phonemic conventions (e.g., Combilex has a single phonemic character representing the sound tS) and/or due to differences in pronunciation in GA and RP, resp. many-to-many alignments — and a subsequent standardly extended discriminative approach — for solving string transduction (here, G2P) in this generalized setup. We have shown that, in a realworld setting, our approach may significantly beat a standard discriminative baseline, e.g., when we add Wiktionary transcriptions or predictions of a rule-based system as additional information to the input strings. The appeal of this approach lies in the fact that almost any sort of external knowledge source may be integrated to improve the performance of a baseline system. For example, supplemental information strings may appear in the form of transliterations of an input string in other languages; they may be predictions of other G2P systems, whether carefully manually crafted or learnt from data; they might even appear in the form of phonetic transcriptions of the input string in other dialects or languages. 
What distinguishes our solution to integrating supplemental information strings in string transduction settings from other research (e.g., (Bhargava and Kondrak, 2011; Bhargava and Kondrak, 2012)) is that rather than integrating systems on the global level of strings, we integrate them on the local level of smaller units, namely, substrings appropriated to the domain of application (e.g., in our context, phonemes/grapheme substructures). Both approaches may be considered complementary. Finally, another important contribution of our work is to outline an ‘approximation algorithm’ to inducing multiple many-to-many alignments of strings, which is otherwise an NP-hard problem for which (most likely) no efficient exact solutions exist, and to investigate its suitability for the problem task. In particular, we have seen that exact alignments lead to better overall model performance, but that the margin over the approximation is not wide. The scope for future research of our modeling is huge: multiple many-to-many alignments may be useful in aligning cognates in linguistic research; they may be the first necessary step for many other ensemble techniques in string transduction as we have considered (Cortes et al., 2014), and they may allow, on a large scale, to boost G2P (transliteration, lemmatization, etc.) systems by integrating them with many traditional (or modern) knowledge resources such as rule- and dictionarybased lemmatizers, crowd-sourced phonetic transcriptions (e.g., based on Wiktionary), etc., with the outlook of significantly outperforming current state-of-the-art models which are based solely on input string information. Finally, we note that we have thus far shown that supplemental information strings may be beneficial in case of overall little training data and that improvements decrease with data size. Further investigating this relationship will be of importance. Morevoer, it will be insightful to compare the exact and approximate alignment algorithms presented here with other (heuristic) alignment methods, such as iterative pairwise alignments as employed in machine translation, and to investigate how alignment quality of multiple strings impacts overall G2P performance in the setup of additional information strings. References S. Bangalore, G. Bodel, and G. Riccardi. 2001. Computing consensus translation from multiple machine translation systems. In In Proceedings of IEEE Automatic Speech Recognition and Understanding Workshop (ASRU-2001, pages 351–354. Susan Bartlett, Grzegorz Kondrak, and Colin Cherry. 2008. Automatic syllabification with structured svms for letter-to-phoneme conversion. In Kathleen McKeown, Johanna D. Moore, Simone Teufel, James Allan, and Sadaoki Furui, editors, ACL, pages 568–576. The Association for Computer Linguistics. Aditya Bhargava and Grzegorz Kondrak. 2009. Multiple word alignment with Profile Hidden Markov Models. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Student Research Workshop and Doctoral Consortium, pages 917 43–48, Boulder, Colorado, June. Association for Computational Linguistics. Aditya Bhargava and Grzegorz Kondrak. 2011. How do you pronounce your name?: Improving g2p with transliterations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, HLT ’11, pages 399–408, Stroudsburg, PA, USA. Association for Computational Linguistics. 
Aditya Bhargava and Grzegorz Kondrak. 2012. Leveraging supplemental representations for sequential transduction. In HLT-NAACL, pages 396–406. The Association for Computational Linguistics. Maximilian Bisani and Hermann Ney. 2008. Jointsequence models for grapheme-to-phoneme conversion. Speech Communication, 50(5):434–451. Eric Brill and Robert C. Moore. 2000. An improved error model for noisy channel spelling correction. In Proceedings of the 38th Annual Meeting on Association for Computational Linguistics, ACL ’00, pages 286–293, Stroudsburg, PA, USA. Association for Computational Linguistics. Corinna Cortes, Vitaly Kuznetsov, and Mehryar Mohri. 2014. Ensemble methods for structured prediction. In Proceedings of the 31th International Conference on Machine Learning, ICML 2014, Beijing, China, 21-26 June 2014, pages 1134–1142. Ryan Cotterell, Nanyun Peng, and Jason Eisner. 2014. Stochastic contextual edit distance and probabilistic FSTs. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL), Baltimore, June. 6 pages. Michael A. Covington. 1998. Alignment of multiple languages for historical comparison. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics, Volume 1, pages 275–279, Montreal, Quebec, Canada, August. Association for Computational Linguistics. Sabine Deligne, Franois Yvon, and Fr´ed´eric Bimbot. 1995. Variable-length sequence matching for phonetic transcription using joint multigrams. In EUROSPEECH. ISCA. Markus Dreyer, Jason Smith, and Jason Eisner. 2008. Latent-variable modeling of string transductions with finite-state methods. In EMNLP, pages 1080– 1089. ACL. Richard Durbin, Sean R. Eddy, Anders Krogh, and Graeme Mitchison. 1998. Biological Sequence Analysis: Probabilistic Models of Proteins and Nucleic Acids. Cambridge University Press. Steffen Eger. 2013. Sequence segmentation by enumeration: An exploration. Prague Bull. Math. Linguistics, 100:113–132. D. F. Feng and R. F. Doolittle. 1987. Progressive sequence alignment as a prerequisite to correct phylogenetic trees. Journal of molecular evolution, 25(4):351–360. Dan Gusfield. 1997. Algorithms on Strings, Trees, and Sequences - Computer Science and Computational Biology. Cambridge University Press. Kenneth Heafield, Greg Hanneman, and Alon Lavie. 2009. Machine translation system combination with flexible word ordering. In Proceedings of the EACL 2009 Fourth Workshop on Statistical Machine Translation, pages 56–60, Athens, Greece, March. Sittichai Jiampojamarn, Grzegorz Kondrak, and Tarek Sherif. 2007. Applying many-to-many alignments and hidden markov models to letter-to-phoneme conversion. In Human Language Technologies 2007: The Conference of the North American Chapter of the Association for Computational Linguistics; Proceedings of the Main Conference, pages 372– 379, Rochester, New York, April. Association for Computational Linguistics. Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2008. Joint processing and discriminative training for letter-to-phoneme conversion. In Proceedings of ACL-08: HLT, pages 905–913, Columbus, Ohio, June. Association for Computational Linguistics. Sittichai Jiampojamarn, Colin Cherry, and Grzegorz Kondrak. 2010. Integrating joint n-gram features into a discriminative training framework. In HLTNAACL, pages 697–700. The Association for Computational Linguistics. Xiaoyi Jiang, Jran Wentker, and Miquel Ferrer. 2012. 
Generalized median string computation by means of string embedding in vector spaces. Pattern Recognition Letters, 33(7):842–852. T. Kohonen. 1985. Median strings. Pattern Recognition Letters, 3:309–313. John Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. 18th International Conf. on Machine Learning, pages 282–289. Morgan Kaufmann, San Francisco, CA. VI Levenshtein. 1966. Binary Codes Capable of Correcting Deletions, Insertions and Reversals. Soviet Physics Doklady, 10:707. Wolfgang Macherey and Franz Josef Och. 2007. An empirical study on computing consensus translations from multiple machine translation systems. In EMNLP-CoNLL, pages 986–995. ACL. Urs-Viktor Marti and Horst Bunke. 2001. Use of positional information in sequence alignment for multiple classifier combination. In Josef Kittler and Fabio Roli, editors, Multiple Classifier Systems, volume 918 2096 of Lecture Notes in Computer Science, pages 388–398. Springer. Evgeny Matusov, Nicola Ueffing, and Hermann Ney. 2006. Computing consensus translation from multiple machine translation systems using enhanced hypotheses alignment. In Conference of the European Chapter of the Association for Computational Linguistics, pages 33–40, Trento, Italy, April. Saul B. Needleman and Christian D. Wunsch. 1970. A general method applicable to the search for similarities in the amino acid sequence of two proteins. Journal of Molecular Biology, 48(3):443– 453, March. Nam Nguyen and Yunsong Guo. 2007. Comparisons of sequence labeling algorithms and extensions. In Zoubin Ghahramani, editor, ICML, volume 227 of ACM International Conference Proceeding Series, pages 681–688. ACM. Josef R. Novak, Nobuaki Minematsu, and Keikichi Hirose. 2012. WFST-based grapheme-to-phoneme conversion: Open source tools for alignment, model-building and decoding. In Proceedings of the 10th International Workshop on Finite State Methods and Natural Language Processing, pages 45–49, Donostia–San Sebastin, July. Association for Computational Linguistics. Michael J. Paul and Jason Eisner. 2012. Implicitly intersecting weighted automata using dual decomposition. In HLT-NAACL, pages 232–242. The Association for Computational Linguistics. Korin Richmond, Robert A. J. Clark, and Susan Fitt. 2009. Robust LTS rules with the Combilex speech technology lexicon. In INTERSPEECH, pages 1295–1298. ISCA. Eric Sven Ristad and Peter N. Yianilos. 1998. Learning string-edit distance. IEEE Trans. Pattern Anal. Mach. Intell., 20(5):522–532. Tarek Sherif and Grzegorz Kondrak. 2007. Substringbased transliteration. In John A. Carroll, Antal van den Bosch, and Annie Zaenen, editors, ACL. The Association for Computational Linguistics. Jeong Seop Sim and Kunsoo Park. 2003. The consensus string problem for a metric is np-complete. J. of Discrete Algorithms, 1(1):111–117, February. Esko Ukkonen. 1985. Algorithms for approximate string matching. Information and Control, 64:100– 118. Jean V´eronis. 1988. Computerized correction of phonographic errors. Computers and the Humanities, 22(1):43–56. 919
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 920–928, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Tweet Normalization with Syllables Ke Xu School of Software Eng. Beijing U. of Posts & Telecom. Beijing 100876, China [email protected] Yunqing Xia STCA Microsoft Beijing 100084, China [email protected] Chin-Hui Lee School of Electr. & Comp. Eng. Georgia Institute of Technology Atlanta, GA 30332-0250, USA [email protected] Abstract In this paper, we propose a syllable-based method for tweet normalization to study the cognitive process of non-standard word creation in social media. Assuming that syllable plays a fundamental role in forming the non-standard tweet words, we choose syllable as the basic unit and extend the conventional noisy channel model by incorporating the syllables to represent the word-to-word transitions at both word and syllable levels. The syllables are used in our method not only to suggest more candidates, but also to measure similarity between words. Novelty of this work is three-fold: First, to the best of our knowledge, this is an early attempt to explore syllables in tweet normalization. Second, our proposed normalization method relies on unlabeled samples, making it much easier to adapt our method to handle non-standard words in any period of history. And third, we conduct a series of experiments and prove that the proposed method is advantageous over the state-of-art solutions for tweet normalization. 1 Introduction Due to the casual nature of social media, there exists a large number of non-standard words in text expressions which make it substantially different from formal written text. It is reported in (Liu et al., 2011) that more than 4 million distinct out-of-vocabulary (OOV) tokens are found in the Edinburgh Twitter corpus (Petrovic et al., 2010). This variation poses challenges when performing natural language processing (NLP) tasks (Sproat et al., 2001) based on such texts. Tweet normalization, aiming at converting these OOV non-standard words into their in-vocabulary (IV) formal forms, is therefore viewed as a very important pre-processing task. Researchers focus their studies in tweet normalization at different levels. A character-level tagging system is used in (Pennell and Liu, 2010) to solve deletion-based abbreviation. It was further extended in (Liu et al., 2012) using more characters instead of Y or N as labels. The character-level machine translation (MT) approach (Pennell and Liu, 2011) was modified in (Li and Liu, 2012a) into character-block. While a string edit distance method was introduced in (Contractor et al., 2010) to represent word-level similarity, and this orthographical feature has been adopted in (Han and Baldwin, 2011), and (Yang and Eisenstein, 2013). Challenges are encountered in these different levels of tweet normalization. In the characterlevel sequential labeling systems, features are required for every character and their combinations, leading to much more noise into the later reverse table look-up process (Liu et al., 2012). In the character-block level MT systems equal number of blocks and their corresponding phonetic symbols are required for alignment (Li and Liu, 2012b). This strict restriction can result in a great difficulty in training set construction and a loss of useful information. 
Finally, word-level normalization methods cannot properly model how non-standard words are formed, and some patterns or consistencies within words can be omitted and altered. We observe the cognitive process that, given non-standard words like tmr, people tend to first segment them into syllables like t-m-r. Then they will find the corresponding standard word with syllables like to-mor-row. Inspired by this cognitive observation, we propose a syllable based tweet normalization method, in which nonstandard words are first segmented into syllables. Since we cannot predict the writers deterministic intention in using tmr as a segmentation of tm-r 920 (representing tim-er) or t-m-r (representing to-mor-row), every possible segmentation form is considered. Then we represent similarity of standard syllables and non-standard syllables using an exponential potential function. After every transition probabilities of standard syllable and non-standard syllable are assigned, we then use noisy channel model and Viterbi decoder to search for the most possible standard candidate in each tweet sentence. Our empirical study reveals that syllable is a proper level for tweet normalization. The syllable is similar to character-block but it represents phonetic features naturally because every word is pronounced with syllables. Our syllable-based tweet normalization method utilizes effective features of both character- and word-level: (1) Like characterlevel, it can capture more detailed information about how non-standard words are generated; (2) Similar to word-level, it reduces a large amount of noisy candidates. Instead of using domain-specific resources, our method makes good use of standard words to extract linguistic features. This makes our method extendable to new normalization tasks or domains. The rest of this paper is organized as follows: previous work in tweet normalization are reviewed and discussed in Section 2. Our approach is presented in Section 3. In Section 4 and Section 5, we provide implementation details and results. Then we make some analysis of the results in Section 6. This work is finally concluded in Section 7. 2 Related Work Non-standard words exhibit different forms and change rapidly, but people can still figure out their original standard words. To properly model this human ability, researchers are studying what remain unchanged under this dynamic characteristic. Human normalization of an non-standard word can be as follows: After realizing the word is non-standard, people usually first figure out standard candidate words in various manners. Then they replace the non-standard words with the standard candidates in the sentence to check whether the sentence can carry a meaning. If not, they switch to a different candidate until a good one is found. Most normalization methods in existence follow the same procedure: candidates are first generated, and then put into the sentence to check whether a reasonable sentence can be formed. Differences lie in how the candidates are generated and weighted. Related work can be classified into three groups. 2.1 Orthographical similarity Orthographical similarity is built upon the assumption that the non-standard words look like its standard counterparts, leading to a high Longest Common Sequence (LCS) and low Edit Distance (ED). This method is widely used in spell checker, in which the LCS and ED scores are calculated for weighting possible candidates. However, problems are that the correct word cannot always be the most looked like one. 
Taking the nonstandard word nite for example, note looks more likely than the correct form night. To overcome this problem, an exception dictionary of strongly-associated word pairs are constructed in (Gouws et al., 2011). Further, these pairs are added into a unified log-linear model in (Yang and Eisenstein, 2013) and Monte Carlo sampling techniques are used to estimate parameters. 2.2 Phonetic similarity The assumption underlying the phonetic similarity is that during transition, non-standard words sound like the standard counterparts, thus the pronunciation of non-standard words can be traced back to a standard dictionary. The challenge is the algorithm to annotate pronunciation of the nonstandard words. Double Metaphone algorithm (Philips, 2000) is used to decode pronunciation and then to represent phonetic similarity by edit distance of these transcripts (Han and Baldwin, 2011). IPA symbols are utilized in (Li and Liu, 2012b) to represent sound of words and then word alignment-based machine translation is applied to generate possible pronunciation of non-standard words. And also, phoneme is used in (Liu et al., 2012) as one kind of features to train their CRF model. 2.3 Contextual similarity It is accepted that after standard words are transformed into non-standard words, the meaning of a sentence remains unchanged. So the normalized standard word must carry a meaning. Most researchers use n-gram language model to normalize a sentence, and several researches use more contextual information. For example, training pairs are generated in (Liu et al., 2012) by a 921 cosine contextual similarity formula whose items are defined by TF-IDF scheme. A bipartite graph is constructed in (Hassan and Menezes, 2013) to represent tokens (both non-standard and standard words) and their context. Thus, random walks on the graph can represent contextual-similarity between non-standard and standard words. Very recently, word-embedding (Mikolov et al., 2010; Mikolov et al., 2013) is utilized in (Li and Liu, 2014) to represent more complex contextual relationship. In word-to-word candidate selection, most researches use orthographical similarity and phonetic similarity separately. In the log-linear model (Yang and Eisenstein, 2013), edit distance is modeled as major feature. In the character- and phonebased approaches (Li and Liu, 2012b), orthographical information and phonetic information were treated separately to generate candidates. In (Han and Baldwin, 2011), candidates from lexical edit distance and phonemic edit distance are merged together. Then an up to 16% increasing recall was reported when adding candidates from phonetic measure. But improper processing level makes it difficult to model the two types of information simultaneously: (1) Single character can hardly reflect orthographical features of one word. (2) As fine-grained reasonable restrictions are lacked, as showed in (Han and Baldwin, 2011), several times of candidates are included when adding phonetic candidates and this will bring much more noise. To combine orthographical and phonetic measure in a fine-grained level, we proposed the syllable-level approach. 3 Approach 3.1 Framework The framework of the proposed tweet normalization method is presented in Figure 1. The proposed method extends the basic HMM channel model (Choudhury et al., 2007; Cook and Stevenson, 2009) into syllable level. And the following four characteristics are very intersting. (1) Combination: When reading a sentence, fast subvocalization will occur in our mind. 
In the process, some non-standard words generated by phonetic substitution are correctly pronounced and then normalized. And also, because subvocalization is fast, people tend to ignore some minor flaws in spelling intentionally or unintentionally. As this often occurs in people’s real-life interacting with these social media language, we believe the combination of phonetic and orthographical information is of great significance. (2) Syllable level: Inspired by Chinese normalization (Xia et al., 2006) using pinyin (phonetic transcripts of Chinese), syllable can be seen as basic unit when processing pronunciation. Different from mono-syllable Chinese words, English words can be multi-syllable; this will bring changes in our method that extra layers of syllables must be put into consideration. Thus, apart from word-based noisy-channel model, we extend it into a syllable-level framework. (3) Priori knowledge: Priori knowledge is acquired from standard words, meaning that both standard syllabification and pronunciation can shed some lights to non-standard words. This assumption makes it possible to obtain non-standard syllables by standard syllabification and gain pronunciation of syllables by standard words and rules generated with them. (4) General patterns: Social media language changes rapidly while labeled data is expensive thus limited. To effectively solve the problem, linguistic features instead of statistical features should be emphasized. We exploit standard words of their syllables, pronunciation and possible transition patterns and proposed the four-layer HMM-based model (see Figure 1). In our method, non-standard words ci are first segmented into syllables sc(1) i . . . sc(k) i , and for standard syllable sw(j) i mapping to non-standard syllable sw(j) i , we calculate their similarity by combining the orthographical and phonetic measures. Standard syllables sw(1) i . . . sw(k) i make up one standard candidates. Since candidates are generated and weighted, we can use Viterbi decoder to perform sentence normalization. Table 1 shows some possible candidates for the nonstandard word tmr. 3.2 Method We extend the noisy channel model to syllablelevel as follows: 922 Formal words Formal word syllables Informal word syllables Informal words Figure 1: Framework of the propose tweet normalization method. bw = argmax p(w|c) = argmax p(c|w) × p(w) = argmax p(⃗sc| ⃗sw) × p( ⃗sw), (1) where w indicates the standard word and c the non-standard word, and sw and sc represent their syllabic form, respectively. To simplify the problem, we restrict the number of standard syllables equals to the number of non-standard syllables in our method. Assuming that syllables are independent of each other in transforming, we obtain: p(⃗sc| ⃗sw) = k Y j=1 p(scj|swj). (2) For syllable similarity, we use an exponential potential function to combine orthographical distance and phonetic distance. Because pronunciation can be represented using letter-to-phone transcripts, we can treat string similarity of these tmr t-mr tm-r t-m-r tamer ta-mer tim-er to-mor-row ti-mor tim-ber tri-mes-ter ti-more ton-er tor-men-tor tu-mor tem-per ta-ma-ra . . . . . . . . . Table 1: Standard candidates of tmr in syllable level. The first row gives the different segmentations and the second row presents the candidates. transcripts as phonetic similarity. Thus the syllable similarity can be calculated as follows. 
p(scj|swj, λ) = Φ(scj, swj) Z(swj) (3) Z(swj) = X scj Φ(scj, swj) (4) Φ(sc, sw) = exp(λ(LCS(sc, sw) −ED(sc, sw)) +(1 −λ)(PLCS(sc, sw) −PED(sc, sw))) (5) Exponential function grows tremendously as its argument increases, so much more weight can be assigned if syllables are more similar. The parameter λ here is used to empirically adjust relative contribution of letters and sounds. Longest common sequence (LCS) and edit distance (ED) are used to measure orthographical similarity, while phonetic longest common sequence (PLCS) and phonetic edit distant (PED) are used to measure phonetic similarity but based on letter-to-sound transcripts. The PLCS are defined as basic LCS but PED here is slightly different. When performing phonetic similarity calculation based on syllables, we follow (Xia et al., 2006) in treating consonant and vowels separately because transition of consonants can make a totally different pronunciation. So if consonants of scj and swj are exactly the same or fit rules listed in Table 2, PED(scj, swj) equals to edit 923 Description Rules Examples 1. -ng as suffix: g-dropping -n/-ng do-in/do-ing, go-in/go-ing, talk-in/talk-ing, mak-in/mak-ing 2. -ng as suffix: n-dropping -g/-ng tak-ig/tak-ing, likig/lik-ing 3. suffix: z/s equaling -z/-s, -s/-z jamz/james, plz/please 4. suffix: n/m equaling -m/-n, -n/-m in-portant/im-portant, get-tim/get-ting 5. suffix: t/d equaling -t/-d, -d/-t shid/shit, shult/should 6. suffix: t-dropping -/-t jus/just, wha/what, mus/must, ain/ain’t 7. suffix: r-dropping -/-r holla/holler, t-m-r/tomorrow 8. prefix: th-/d- equaling d-/th-, th-/dde/the, dat/that, dats/that’s, dey/they Table 2: The consonant rules. distance of letter-to-phone transcripts, or it will be assigned infinity to indicate that their pronunciation are so different that this transition can seldom happen. For example, as consonantal transition between suffix z and s can always happen, PED(plz,please) equals string edit distance of their transcripts. But as consonatal transition of f and d is rare, phonetic distance of fly and sky is assigned infinity. Note the consonant rules in Table 2 are manually defined in our empirical study, which represent the most commonly used ones. 3.3 Parameter Parameter in the proposed method is only the λ in Equation (5), which represents the relative contribution of orthographical similarity and phonetic similarity. Because the limited number of annotated corpus, we have to enumerate the parameter in {0, 0.1, 0.2, ..., 1} in the experiment to find the optimal setting. 4 Implementation The method described in the previous section are implemented with the following details. 4.1 Preprocessing Before performing normalization, we need to process several types of non-standard words: • Words containing numbers: People usually substitute some kind of sounds with numbers like 4/four, 2/two and 8/eight or numbers can be replacement of some letters like 1/i, 4/a. So we replace numbers with its words or characters and then use them to generate possible candidates. • Words with repeating letters: As our method is syllable-based, repeating letters for sentiment expressing (like cooool, (Brody and Diakopoulos, 2011)) can cause syllabifying failure. For repeating letters, we reduce it to both two and one to generate candidate separately. Then the two lists are merged together to form the whole candidate list. 4.2 Letter-to-sound conversion Syllable in this work refers to orthographic syllables. For example, we convert word tomorrow into to-mor-row. 
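Before describing how pronunciations are obtained, a small sketch of the per-syllable score in Equations (3)–(5) may help. It is only an illustration under our own assumptions: the letter-to-phone transcripts of the two syllables are taken as given, the function names are ours, and the consonant-rule check of Table 2 is abstracted into a single boolean flag rather than the authors' actual implementation.

```python
import math

def edit_distance(a, b):
    """Levenshtein distance between two strings."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def lcs(a, b):
    """Length of the longest common subsequence of two strings."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, ca in enumerate(a, 1):
        for j, cb in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if ca == cb else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

def syllable_potential(sc, sw, sc_phones, sw_phones, consonants_ok, lam=0.7):
    """Unnormalized potential Phi(sc, sw) of Eq. (5) for one syllable pair.

    sc / sw are the letter strings of the non-standard and standard syllable;
    sc_phones / sw_phones are their letter-to-phone transcripts (assumed given);
    consonants_ok is True when the consonants match exactly or fit a rule in
    Table 2 -- otherwise PED is treated as infinite and the potential is 0."""
    ortho = lcs(sc, sw) - edit_distance(sc, sw)                              # LCS - ED
    if not consonants_ok:
        return 0.0
    phon = lcs(sc_phones, sw_phones) - edit_distance(sc_phones, sw_phones)   # PLCS - PED
    return math.exp(lam * ortho + (1.0 - lam) * phon)
```

For a pair such as t-m-r / to-mor-row, each aligned syllable pair would be scored this way, normalized by Z(sw_j) as in Equations (3)–(4), and the per-syllable probabilities multiplied as in Equation (2).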
However, when comparing the syllable of a standard word and that of a nonstandard word, sound (i.e., phones) of the syllables are considered. Thus letter-to-sound conversion tools are required. Several TTS system can perform the task according to some linguistic rules, even for nonstandard words. The Double Metaphone algorithm used in (Han and Baldwin, 2011) is one of them. But it uses consonants to encode a word, which gives less information than we need. In our method, we use freeTTS (Walker et al., 2002) with CMU lexicon1 to transform words into APRAbet2 symbols. For example, word tomorrow is transcribed to {T-UW M-AA R-OW} and tmr to {T M R}. 4.3 Dictionary preparation • Dictionary #1: In-vocabulary (IV) words Following (Yang and Eisenstein, 2013), our set of IV words is also based on the GNU aspell dictionary (v0.60.6). Differently, we use a collection of 100 million tweets (roughly the same size of Edinburgh Twitter corpus) because the Edinburgh Twitter corpus is no longer available due to Twitter policies. The 1http://www.speech.cs.cmu.edu/cgi-bin/cmudict 2http://en.wikipedia.org/wiki/Arpabet 924 final IV dictionary contains 51,948 standard words. • Dictionary #2: Syllables for the standard words Following (Pennell and Liu, 2010), we use the online dictionary3 to extract syllables for each standard words. We encountered same problem when accessing words with prefixes or suffixes, which are not syllabified in the same format as the base words on the website. To address the issue, we simply regard these prefixes and suffixes as syllables. • Dictionary #3: Pronunciation of the syllables Using the CMU pronouncing dictionary (Weide, 1998) and dictionary 2, and knowing all possible APRAbet symbol for all consonant characters, we can program to capture every possible pronunciation of all syllables in the standard dictionary. 4.4 Automatic syllabification of non-standard words Automatic syllabification of non-standard words is a supervised problem. A straightforward idea is to train a CRF model on manually labeled syllables of non-standard words. Unfortunately, such a corpus is not available and very expensive to produce. We assume that both standard and non-standard forms follow the same syllable rules (i.e., the cognitive process). Thus we propose to train the CRF model on the corpus of syllables of standard words (which is easy to obtain) to construct an automatic annotation system based on CRF++ (Kudo, 2005). In this work, we extract syllables of standard words from Dictionary #2 as training set. Annotations follow (Pennell and Liu, 2010) to identify boundaries of syllables and in our work, CRF++ can suggest several candidate solutions, rather than an optimal segmentation solution for syllable segmentation of the non-standard words. In the HMM channel model, the candidate solutions are included as part of the search space. 4.5 Language model Using Tweets from our corpus that contain no OOV words besides hashtags and username mentions (following (Han and Baldwin, 2011)), the 3http://www.dictionary.com Kneser-Ney smoothed tri-gram language model is estimated using SRILM toolkit (Stolcke, 2002). Note that punctuations, hashtags, and username mentions have some syntactic value (Kaufmann and Kalita, 2010) to some extent, we replace them with ’<PUNCT>’, ’<TOPIC>’ and ’<USER>’. 5 Evaluation 5.1 Datasets We use two labeled twitter datasets in existence to evaluate our tweet normalization method. 
• LexNorm1.1 contains 549 complete tweets with 1184 non-standard tokens (558 unique word type) (Han and Baldwin, 2011). • LexNorm1.2 is a revised version of LexNorm1.1 (Yang and Eisenstein, 2013). Some inconsistencies and errors in LexNorm1.1 are corrected and some more non-standard words are properly recovered. In both datasets, to-be-normalized non-standard words are detected manually as well as the corresponding standard words. 5.2 Evaluation criteria Here we use precision, recall and F-score to evaluate our method. As normalization methods on these datasets focused on the labeled nonstandard words (Yang and Eisenstein, 2013), recall is the proportion of words requiring normalization which are normalized correctly; precision is the proportion of normalizations which are correct. When we perform the tweet normalization methods, every error is both a false positive and false negative, so in the task, precision equals to recall. 5.3 Sentence level normalization We choose the following prior normalization methods: • (Liu et al., 2012): the extended characterlevel CRF tagging system; • (Yang and Eisenstein, 2013): log-linear model using string edit distance and longest common sequence measures as major features; • (Hassan and Menezes, 2013): bipartite graph major exploit contextual similarity; 925 Method Dataset Precision Recall F-measure (Han and Baldwin, 2011) LexNorm 1.1 75.30 75.30 75.30 (Liu et al., 2012) 84.13 78.38 81.15 (Hassan and Menezes, 2013) 85.37 56.4 69.93 (Yang and Eisenstein, 2013) 82.09 82.09 82.09 Syllable-based method 85.30 85.30 85.30 (Yang and Eisenstein, 2013) LexNorm 1.2 82.06 82.06 82.06 Syllable-based method 86.08 86.08 86.08 Table 3: Experiment results of the tweet normalization methods. • (Han and Baldwin, 2011): the orthographyphone combined system using lexical edit distance and phonemic edit distance. In our method, we set λ=0.7 because it is found best in our experiments (see Figure 2). The experimental results are presented in Table 3, which indicate that our method outperforms the state-of-the-art methods. Details on how to adjust parameter is given in Section 5.4. Recall we argue that combination of three similarity is necessary when performing sentence-level normalization. Apart from contextual similarity like language model or graphic model, methods in (Yang and Eisenstein, 2013) or (Hassan and Menezes, 2013) do not include phonetic measure, causing loss of important phonetic information. Though using phoneme, morpheme boundary and syllable boundary as features (Liu et al., 2012), the character-level reversed approach will bring much more noise into the later reversed look-up table, and also, features of whole word are omitted. Like (Han and Baldwin, 2011), we also use lexical measure and phonetic measure. Great difference between the two approaches is the processing level: word level and syllable level. In their work, average candidates number suffers times of increase when adding phonetic measure. This is because when introducing phonemic edit distance, important pronunciations can be altered (phonemic edit distance of night-need and night-kite is equal). Syllable level allows us to reflect consistencies during transition in a finergrained level. Thus the phonetic similarity can be more precisely modeled. 5.4 Contributions of phone and orthography In our method, the parameter λ in Equation 5 is used to represent the relatively contributions of both phonetic and orthographical information. But as the lack of prior knowledge, we cannot judge an optimal λ. 
We choose to conduct experiments varying λ over {0, 0.1, ..., 1} to find out how this setting affects performance. The experimental results are presented in Figure 2.

Figure 2: Contribution of phone and orthography (F-measure on LexNorm1.1 and LexNorm1.2 as λ varies from 0 to 1).

As shown in Figure 2, when λ is set to 0 or 1 (i.e., either the orthographical or the phonetic measure makes no contribution when weighting candidates), our method performs much worse. In our experiments the model performs best at λ = 0.7, showing that the orthographical measure contributes relatively more than the phonetic measure, but that the latter is indispensable. This justifies the effectiveness of combining the orthographical and phonetic measures, indicating that the human normalization process is properly modeled.

6 Analysis

6.1 Our exceptions

A closer look at our normalization results shows several types of exceptions beyond our consonant-based rules. For example, thanks fails to be selected as a candidate for the non-standard word thx because the pronunciation of thanks contains an N but thx does not. The same situation occurs when we process stong/strong because of the missing R. We
Furthermore, give the syllable transcription tool, our method can be easily adapted to a new language. Acknowledgement This research work was carried out when the authors worked at Tsinghua University. We acknowledge the financial support from Natural Science Foundation of China (NSFC: 61272233, 61373056, 61433018). We thank the anonymous reviewers for the insightful comments. References Samuel Brody and Nicholas Diakopoulos. 2011. Cooooooooooooooollllllllllllll!!!!!!!!!!!!!! using word lengthening to detect sentiment in microblogs. In EMNLP, pages 562–570. ACL. Monojit Choudhury, Rahul Saraf, Vijit Jain, Animesh Mukherjee, Sudeshna Sarkar, and Anupam Basu. 2007. Investigation and modeling of the structure of texting language. International Journal of Document Analysis and Recognition (IJDAR), 10(34):157–174. Danish Contractor, Tanveer A. Faruquie, and L. Venkata Subramaniam. 2010. Unsupervised cleansing of noisy text. In Chu-Ren Huang and Dan Jurafsky, editors, COLING (Posters), pages 189–196. Chinese Information Processing Society of China. Paul Cook and Suzanne Stevenson. 2009. An unsupervised model for text message normalization. In CALC ’09: Proceedings of the Workshop on Computational Approaches to Linguistic Creativity, pages 71–78, Morristown, NJ, USA. Association for Computational Linguistics. Stephan Gouws, Dirk Hovy, and Donald Metzler. 2011. Unsupervised mining of lexical variants from noisy text. In Proceedings of the First workshop on Unsupervised Learning in NLP, pages 82–90. Association for Computational Linguistics. Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In Dekang Lin, Yuji Matsumoto, and Rada Mihalcea, editors, ACL, pages 368–378. The Association for Computer Linguistics. Hany Hassan and Arul Menezes. 2013. Social text normalization using contextual graph random walks. In ACL (1), pages 1577–1586. The Association for Computer Linguistics. 927 Max Kaufmann and Jugal Kalita. 2010. Syntactic normalization of Twitter messages. In International conference on natural language processing, Kharagpur, India. Taku Kudo. 2005. Crf++: Yet another crf toolkit. Software available at http://crfpp. sourceforge. net. Chen Li and Yang Liu. 2012a. Improving text normalization using character-blocks based models and system combination. In COLING 2012, 24th International Conference on Computational Linguistics, Proceedings of the Conference: Technical Papers, 8-15 December 2012, Mumbai, India, pages 1587–1602. Chen Li and Yang Liu. 2012b. Normalization of text messages using character- and phone-based machine translation approaches. In INTERSPEECH. ISCA. Chen Li and Yang Liu. 2014. Improving text normalization via unsupervised model and discriminative reranking. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, ACL 2014, June 22-27, 2014, Baltimore, MD, USA, Student Research Workshop, pages 86– 93. Fei Liu, Fuliang Weng, Bingqing Wang, and Yang Liu. 2011. Insertion, deletion, or substitution?: normalizing text messages without precategorization nor supervision. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies: short papers-Volume 2, pages 71–76. Association for Computational Linguistics. Fei Liu, Fuliang Weng, and Xiao Jiang. 2012. A broad-coverage normalization system for social media language. In In Proceedings of ACL: Long Papers-Volume 1, pages 1035–1044. Association for Computational Linguistics. 
Tom´aˇs Mikolov, Martin Karafi´at, Luk Burget, Jan ernock, and Sanjeev Khudanpur. 2010. Recurrent neural network based language model. In Interspeech, pages 1045–1048. Tom´aˇs Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781. Deana Pennell and Yang Liu. 2010. Normalization of text messages for text-to-speech. In ICASSP, pages 4842–4845. IEEE. Deana Pennell and Yang Liu. 2011. A character-level machine translation approach for normalization of sms abbreviations. In IJCNLP, pages 974–982. S. Petrovic, M. Osborne, and V. Lavrenko. 2010. The edinburgh twitter corpus. In Proceedings of the NAACL HLT Workshop on Computational Linguistics in a World of Social Media, pages 25– 26. Lawrence Philips. 2000. The double metaphone search algorithm. C/C++ Users Journal, 18(5), June. Richard Sproat, Alan W. Black, Stanley F. Chen, Shankar Kumar, Mari Ostendorf, and Christopher Richards. 2001. Normalization of non-standard words. Computer Speech & Language, 15(3):287– 333. Andreas Stolcke. 2002. Srilm-an extensible language modeling toolkit. In Proceedings International Conference on Spoken Language Processing, pages 257–286, November. Willie Walker, Paul Lamere, and Philip Kwok. 2002. Freetts: a performance case study. Robert L Weide. 1998. The cmu pronouncing dictionary. URL: http://www. speech. cs. cmu. edu/cgibin/cmudict. Yunqing Xia, Kam-Fai Wong, and Wenjie Li. 2006. A phonetic-based approach to chinese chat text normalization. In Nicoletta Calzolari, Claire Cardie, and Pierre Isabelle, editors, ACL. The Association for Computer Linguistics. Yi Yang and Jacob Eisenstein. 2013. A log-linear model for unsupervised text normalization. In EMNLP, pages 61–72. ACL. 928
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 84–94, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Semantically Smooth Knowledge Graph Embedding Shu Guo†, Quan Wang†∗, Bin Wang†, Lihong Wang‡, Li Guo† †Institute of Information Engineering, Chinese Academy of Sciences, Beijing 100093, China {guoshu,wangquan,wangbin,guoli}@iie.ac.cn ‡National Computer Network Emergency Response Technical Team Coordination Center of China, Beijing 100029, China [email protected] Abstract This paper considers the problem of embedding Knowledge Graphs (KGs) consisting of entities and relations into lowdimensional vector spaces. Most of the existing methods perform this task based solely on observed facts. The only requirement is that the learned embeddings should be compatible within each individual fact. In this paper, aiming at further discovering the intrinsic geometric structure of the embedding space, we propose Semantically Smooth Embedding (SSE). The key idea of SSE is to take full advantage of additional semantic information and enforce the embedding space to be semantically smooth, i.e., entities belonging to the same semantic category will lie close to each other in the embedding space. Two manifold learning algorithms Laplacian Eigenmaps and Locally Linear Embedding are used to model the smoothness assumption. Both are formulated as geometrically based regularization terms to constrain the embedding task. We empirically evaluate SSE in two benchmark tasks of link prediction and triple classification, and achieve significant and consistent improvements over state-of-the-art methods. Furthermore, SSE is a general framework. The smoothness assumption can be imposed to a wide variety of embedding models, and it can also be constructed using other information besides entities’ semantic categories. 1 Introduction Knowledge Graphs (KGs) like WordNet (Miller, 1995), Freebase (Bollacker et al., 2008), and DB∗Corresponding author: Quan Wang. pedia (Lehmann et al., 2014) have become extremely useful resources for many NLP related applications, such as word sense disambiguation (Agirre et al., 2014), named entity recognition (Magnini et al., 2002), and information extraction (Hoffmann et al., 2011). A KG is a multirelational directed graph composed of entities as nodes and relations as edges. Each edge is represented as a triple of fact ⟨ei, rk, e j⟩, indicating that head entity ei and tail entity e j are connected by relation rk. Although powerful in representing structured data, the underlying symbolic nature makes KGs hard to manipulate. Recently a new research direction called knowledge graph embedding has attracted much attention (Socher et al., 2013; Bordes et al., 2013; Bordes et al., 2014; Lin et al., 2015). It attempts to embed components of a KG into continuous vector spaces, so as to simplify the manipulation while preserving the inherent structure of the original graph. Specifically, given a KG, entities and relations are first represented in a low-dimensional vector space, and for each triple, a scoring function is defined to measure its plausibility in that space. Then the representations of entities and relations (i.e. embeddings) are learned by maximizing the total plausibility of observed triples. 
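To make this three-step recipe concrete before the formal treatment in Section 2, the short sketch below shows one popular instantiation: the TransE scoring function together with the margin-based ranking loss that is minimized during learning (formalized later as Eq. (1)). The snippet is only illustrative; the variable names and toy vectors are ours, not the authors' code.

```python
import numpy as np

def transe_energy(e_head, r, e_tail):
    """TransE energy f(e_i, r_k, e_j) = ||e_i + r_k - e_j||_1; lower means more plausible."""
    return np.linalg.norm(e_head + r - e_tail, ord=1)

def margin_ranking_loss(pos, neg, gamma=1.0):
    """Hinge loss [gamma + f(pos) - f(neg)]_+ for one positive/negative triple pair:
    observed triples should receive lower energies than corrupted ones."""
    return max(0.0, gamma + transe_energy(*pos) - transe_energy(*neg))

# toy example with 3-dimensional embeddings
e_i = np.array([0.1, 0.2, 0.0])
r_k = np.array([0.3, -0.1, 0.2])
e_j = np.array([0.4, 0.1, 0.2])
e_corrupt = np.array([-0.5, 0.9, 0.3])
loss = margin_ranking_loss((e_i, r_k, e_j), (e_i, r_k, e_corrupt))
```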
The learned embeddings can further be used to benefit all kinds of tasks, such as KG completion (Socher et al., 2013; Bordes et al., 2013), relation extraction (Riedel et al., 2013; Weston et al., 2013), and entity resolution (Bordes et al., 2014). To our knowledge, most of existing KG embedding methods perform the embedding task based solely on observed facts. The only requirement is that the learned embeddings should be compatible within each individual fact. In this paper we propose Semantically Smooth Embedding (SSE), a new approach which further imposes constraints on the geometric structure of the embedding space. The key idea of SSE is to make ful84 l use of additional semantic information (i.e. semantic categories of entities) and enforce the embedding space to be semantically smooth—entities belonging to the same semantic category should lie close to each other in the embedding space. This smoothness assumption is closely related to the local invariance assumption exploited in manifold learning theory, which requires nearby points to have similar embeddings or labels (Belkin and Niyogi, 2001). Thus we employ two manifold learning algorithms Laplacian Eigenmaps (Belkin and Niyogi, 2001) and Locally Linear Embedding (Roweis and Saul, 2000) to model the smoothness assumption. The former requires an entity to lie close to every other entity in the same category, while the latter represents that entity as a linear combination of its nearest neighbors (i.e. entities within the same category). Both are formulated as manifold regularization terms to constrain the KG embedding objective function. As such, SSE obtains an embedding space which is semantically smooth and at the same time compatible with observed facts. The advantages of SSE are two-fold: 1) By imposing the smoothness assumption, SSE successfully captures the semantic correlation between entities, which exists intrinsically but is overlooked in previous work on KG embedding. 2) KGs are typically very sparse, containing a relatively small number of facts compared to the large number of entities and relations. SSE can effectively deal with data sparsity by leveraging additional semantic information. Both aspects lead to more accurate embeddings in SSE. Moreover, our approach is quite general. The smoothness assumption can actually be imposed to a wide variety of KG embedding models. Besides semantic categories, other information (e.g. entity similarities specified by users or derived from auxiliary data sources) can also be used to construct the manifold regularization terms. And besides KG embedding, similar smoothness assumptions can also be applied in other embedding tasks (e.g. word embedding and sentence embedding). Our main contributions can be summarized as follows. First, we devise a novel KG embedding framework that naturally requires the embedding space to be semantically smooth. As far as we know, it is the first work that imposes constraints on the geometric structure of the embedding space during KG embedding. By leveraging additional semantic information, our approach can also deal with the data sparsity issue that commonly exists in typical KGs. Second, we evaluate our approach in two benchmark tasks of link prediction and triple classification, and achieve significant and consistent improvements over state-ofthe-art models. In the remainder of this paper, we first provide a brief review of existing KG embedding models in Section 2, and then detail the proposed SSE framework in Section 3. 
Experiments and results are reported in Section 4. Then in Section 5 we discuss related work, followed by the conclusion and future work in Section 6. 2 A Brief Review of KG Embedding KG embedding aims to embed entities and relations into a continuous vector space and model the plausibility of each fact in that space. In general, it consists of three steps: 1) representing entities and relations, 2) specifying a scoring function, and 3) learning the latent representations. In the first step, given a KG, entities are represented as points (i.e. vectors) in a continuous vector space, and relations as operators in that space, which can be characterized by vectors (Bordes et al., 2013; Bordes et al., 2014; Wang et al., 2014b), matrices (Bordes et al., 2011; Jenatton et al., 2012), or tensors (Socher et al., 2013). In the second step, for each candidate fact ⟨ei, rk, e j⟩, an energy function f(ei, rk, e j) is further defined to measure its plausibility, with the corresponding entity and relation representations as variables. Plausible triples are assumed to have low energies. Then in the third step, to obtain the entity and relation representations, a marginbased ranking loss, i.e., L= ∑ t+∈O ∑ t−∈Nt+ [ γ+ f(ei, rk, e j)−f(e′ i, rk, e′ j) ] + , (1) is minimized. Here, O is the set of observed (i.e. positive) triples, and t+ = ⟨ei, rk, e j⟩∈O; Nt+ denotes the set of negative triples constructed by replacing entities in t+, and t−= ⟨e′ i, rk, e′ j⟩∈Nt+; γ > 0 is a margin separating positive and negative triples; and [x]+ = max(0, x). The ranking loss favors lower energies for positive triples than for negative ones. Stochastic gradient descent (in mini-batch mode) is adopted to solve the minimization problem. For details please refer to (Bordes et al., 2013) and references therein. Different embedding models differ in the first two steps: entity/relation representation and energy 85 Method Entity/Relation embeddings Energy function TransE (Bordes et al., 2013) e, r ∈Rd f(ei, rk, e j) = ∥ei + rk −e j∥ℓ1/ℓ2 SME (lin) (Bordes et al., 2014) e, r ∈Rd f(ei, rk, e j) = (Wu1rk + Wu2ei + bu)T ( Wv1rk + Wv2e j + bv ) SME (bilin) (Bordes et al., 2014) e, r ∈Rd f(ei, rk, e j) = (( Wu ¯×3rk ) ei + bu )T (( Wv ¯×3rk ) e j + bv ) SE (Bordes et al., 2011) e ∈Rd, Ru, Rv ∈Rd×d f(ei, rk, e j) = ∥Ru kei −Rv ke j∥ℓ1 Table 1: Existing KG embedding models. function definition. Three state-of-the-art embedding models, namely TransE (Bordes et al., 2013), SME (Bordes et al., 2014), and SE (Bordes et al., 2011), are detailed below. Please refer to (Jenatton et al., 2012; Socher et al., 2013; Wang et al., 2014b; Lin et al., 2015) for other methods. TransE (Bordes et al., 2013) represents both entities and relations as vectors in the embedding space. For a given triple ⟨ei, rk, e j⟩, the relation is interpreted as a translation vector rk so that the embedded entities ei and e j can be connected by rk with low error. The energy function is defined as f(ei, rk, e j) = ∥ei + rk −e j∥ℓ1/ℓ2, where ∥·∥ℓ1/ℓ2 denotes the ℓ1-norm or ℓ2-norm. SME (Bordes et al., 2014) also represents entities and relations as vectors, but models triples in a more expressive way. Given a triple ⟨ei, rk, e j⟩, it first employs a function gu (·, ·) to combine rk and ei, and gv (·, ·) to combine rk and e j. Then, the energy function is defined as matching gu (·, ·) and gv (·, ·) by their dot product, i.e., f(ei, rk, e j) = gu(rk, ei)Tgv(rk, e j). 
There are two versions of SME, linear and bilinear (denoted as SME (lin) and SME (bilin) respectively), obtained by defining different gu (·, ·) and gv (·, ·). SE (Bordes et al., 2011) represents entities as vectors but relations as matrices. Each relation is modeled by a left matrix Ru k and a right matrix Rv k, acting as independent projections to head and tail entities respectively. If a triple ⟨ei, rk, e j⟩holds, Ru kei and Rv ke j should be close to each other. The energy function is f(ei, rk, ej) = ∥Ru kei −Rv kej∥ℓ1. Table 1 summarizes the entity/relation representations and energy functions used in these models. 3 Semantically Smooth Embedding The methods introduced above perform the embedding task based solely on observed facts. The only requirement is that the learned embeddings should be compatible within each individual fact. However, they fail to discover the intrinsic geometric structure of the embedding space. To deal with this limitation, we introduce Semantically Smooth Embedding (SSE) which constrains the embedding task by incorporating geometrically based regularization terms, constructed by using additional semantic categories of entities. 3.1 Problem Formulation Suppose we are given a KG consisting of n entities and m relations. The facts observed are stored as a set of triples O = { ⟨ei, rk, e j⟩ } . A triple ⟨ei, rk, e j⟩ indicates that entity ei and entity e j are connected by relation rk. In addition, the entities are classified into multiple semantic categories. Each entity e is associated with a label ce indicating the category to which it belongs. SSE aims to embed the entities and relations into a continuous vector space which is compatible with the observed facts, and at the same time semantically smooth. To make the embedding space compatible with the observed facts, we make use of the triple set O and follow the same strategy adopted in previous methods. That is, we define an energy function on each candidate triple (e.g. the energy functions listed in Table 1), and require observed triples to have lower energies than unobserved ones (i.e. the margin-based ranking loss defined in Eq. (1)). To make the embedding space semantically smooth, we further leverage the entity category information {ce}, and assume that entities within the same semantic category should lie close to each other in the embedding space. This smoothness assumption is similar to the local invariance assumption exploited in manifold learning theory (i.e. nearby points are likely to have similar embeddings or labels). So we employ two manifold learning algorithms Laplacian Eigenmaps (Belkin and Niyogi, 2001) and Locally Linear Embedding (Roweis and Saul, 2000) to model such semantic smoothness, termed as LE and LLE for short respectively. 3.2 Modeling Semantic Smoothness by LE Laplacian Eigenmaps (LE) is a manifold learning algorithm that preserves local invariance between 86 each two data points (Belkin and Niyogi, 2001). We borrow the idea of LE and enforce semantic smoothness by assuming: Smoothness Assumption 1 If two entities ei and e j belong to the same semantic category, they will have embeddings ei and e j close to each other. To encode the semantic information, we construct an adjacency matrix W1 ∈Rn×n among the entities, with the ij-th entry defined as: w(1) i j =  1, if cei = ce j, 0, otherwise, where cei/cej is the category label of entity ei/e j. 
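As a small illustration of this construction, the sketch below builds W1 from a mapping of entity indices to category labels. It is only a sketch under our own assumptions: entities without a label are simply left unconnected, a dense matrix is used purely for clarity, and the diagonal is zeroed, which is harmless since ||ei − ei|| = 0 contributes nothing to the smoothness term introduced next.

```python
import numpy as np

def build_category_adjacency(categories, n):
    """W1[i, j] = 1 iff entities i and j carry the same category label.

    `categories` maps entity index -> category label; unlabeled entities
    receive no links."""
    W1 = np.zeros((n, n))
    by_cat = {}
    for i, c in categories.items():
        by_cat.setdefault(c, []).append(i)
    for members in by_cat.values():
        idx = np.array(members)
        W1[np.ix_(idx, idx)] = 1.0
    np.fill_diagonal(W1, 0.0)
    return W1

# toy example: five entities, two categories
W1 = build_category_adjacency({0: "city", 1: "city", 2: "athlete", 3: "athlete", 4: "athlete"}, n=5)
```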
Then, we use the following term to measure the smoothness of the embedding space:

\[ R_1 = \frac{1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \|\mathbf{e}_i - \mathbf{e}_j\|_2^2 \, w^{(1)}_{ij}, \]

where $\mathbf{e}_i$ and $\mathbf{e}_j$ are the embeddings of entities $e_i$ and $e_j$ respectively. By minimizing $R_1$, we expect Smoothness Assumption 1 to hold: if two entities $e_i$ and $e_j$ belong to the same semantic category (i.e. $w^{(1)}_{ij} = 1$), the distance between $\mathbf{e}_i$ and $\mathbf{e}_j$ (i.e. $\|\mathbf{e}_i - \mathbf{e}_j\|_2^2$) should be small. We further incorporate $R_1$ as a regularization term into the margin-based ranking loss (i.e. Eq. (1)) adopted in previous KG embedding methods, and propose our first SSE model. The new model performs the embedding task by minimizing the following objective function:

\[ L_1 = \frac{1}{N} \sum_{t^+ \in O} \sum_{t^- \in N_{t^+}} \ell(t^+, t^-) + \frac{\lambda_1}{2} \sum_{i=1}^{n} \sum_{j=1}^{n} \|\mathbf{e}_i - \mathbf{e}_j\|_2^2 \, w^{(1)}_{ij}, \]

where $\ell(t^+, t^-) = \left[ \gamma + f(e_i, r_k, e_j) - f(e'_i, r_k, e'_j) \right]_+$ is the ranking loss on the positive-negative triple pair $(t^+, t^-)$, and $N$ is the total number of such triple pairs. The first term in $L_1$ enforces the resultant embedding space to be compatible with all the observed triples, and the second term further requires that space to be semantically smooth. Hyperparameter $\lambda_1$ makes a trade-off between the two cases. The minimization is carried out by stochastic gradient descent. Given a randomly sampled positive triple $t^+ = \langle e_i, r_k, e_j \rangle$ and the associated negative triple $t^- = \langle e'_i, r_k, e'_j \rangle$ (constructed by replacing one of the entities in the positive triple), the stochastic gradient w.r.t. $\mathbf{e}_s$ ($s \in \{i, j, i', j'\}$) can be calculated as:

\[ \nabla_{\mathbf{e}_s} L_1 = \nabla_{\mathbf{e}_s} \ell(t^+, t^-) + 2 \lambda_1 \mathbf{E} \left( \mathbf{D} - \mathbf{W}_1 \right) \mathbf{1}_s, \]

where $\mathbf{E} = [\mathbf{e}_1, \mathbf{e}_2, \cdots, \mathbf{e}_n] \in \mathbb{R}^{d \times n}$ is the matrix of entity embeddings; $\mathbf{D} \in \mathbb{R}^{n \times n}$ is a diagonal matrix whose $i$-th diagonal entry is $d_{ii} = \sum_{j=1}^{n} w^{(1)}_{ij}$; and $\mathbf{1}_s \in \mathbb{R}^{n}$ is a column vector whose $s$-th entry is 1 and whose other entries are 0. Other parameters are not included in $R_1$, and their gradients remain the same as defined in previous work.

3.3 Modeling Semantic Smoothness by LLE

As opposed to LE, which preserves local invariance within data pairs, Locally Linear Embedding (LLE) expects each data point to be roughly reconstructed by a linear combination of its nearest neighbors (Roweis and Saul, 2000). We borrow the idea of LLE and enforce semantic smoothness by assuming:

Smoothness Assumption 2 Each entity $e_i$ can be roughly reconstructed by a linear combination of its nearest neighbors in the embedding space, i.e., $\mathbf{e}_i \approx \sum_{e_j \in N(e_i)} \alpha_j \mathbf{e}_j$. Here nearest neighbors refer to entities belonging to the same semantic category as $e_i$.

To model this assumption, for each entity $e_i$, we randomly sample $K$ entities uniformly from the category to which $e_i$ belongs, denoted as the nearest neighbor set $N(e_i)$. We construct a weight matrix $\mathbf{W}_2 \in \mathbb{R}^{n \times n}$ by defining

\[ w^{(2)}_{ij} = \begin{cases} 1, & \text{if } e_j \in N(e_i), \\ 0, & \text{otherwise}, \end{cases} \]

and normalize the rows so that $\sum_{j=1}^{n} w^{(2)}_{ij} = 1$ for each row $i$. Note that $\mathbf{W}_2$ is no longer a symmetric matrix. The smoothness of the embedding space can then be measured by the reconstruction error:

\[ R_2 = \sum_{i=1}^{n} \Big\| \mathbf{e}_i - \sum_{e_j \in N(e_i)} w^{(2)}_{ij} \mathbf{e}_j \Big\|_2^2. \]

Minimizing $R_2$ results in Smoothness Assumption 2: each entity can be linearly reconstructed from its nearest neighbors with low error. By incorporating $R_2$ as a regularization term into the margin-based ranking loss defined in Eq. (1), we obtain our second SSE model, which performs the embedding task by minimizing:

\[ L_2 = \frac{1}{N} \sum_{t^+ \in O} \sum_{t^- \in N_{t^+}} \ell(t^+, t^-) + \lambda_2 \sum_{i=1}^{n} \Big\| \mathbf{e}_i - \sum_{e_j \in N(e_i)} w^{(2)}_{ij} \mathbf{e}_j \Big\|_2^2. \]
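For readers who prefer code to summations, both regularizers and the extra LE gradient term can be written compactly. The numpy sketch below follows the notation above (E is the d × n matrix of entity embeddings, W1 and W2 the two weight matrices); the function names and the use of dense matrices are our own simplifications, not the authors' implementation.

```python
import numpy as np

def le_regularizer(E, W1):
    """R1 = 1/2 * sum_ij ||e_i - e_j||^2 * w1_ij, computed via the graph Laplacian D - W1."""
    D = np.diag(W1.sum(axis=1))
    return float(np.trace(E @ (D - W1) @ E.T))

def le_gradient_term(E, W1, s, lam1):
    """The 2 * lam1 * E (D - W1) 1_s term added to the gradient w.r.t. entity s."""
    D = np.diag(W1.sum(axis=1))
    one_s = np.zeros(E.shape[1])
    one_s[s] = 1.0
    return 2.0 * lam1 * (E @ (D - W1) @ one_s)

def lle_regularizer(E, W2):
    """R2 = sum_i ||e_i - sum_j w2_ij e_j||^2 with row-normalized W2."""
    residual = E - E @ W2.T        # column i holds e_i - sum_j w2_ij e_j
    return float(np.sum(residual * residual))
```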
Hyperparameter λ2 makes a trade-offbetween the two cases. Similar to the first model, stochastic gradient descent is used to solve the minimization problem. Given a positive triple t+ = ⟨ei, rk, e j⟩and the associated negative triple t−= ⟨e′ i, rk, e′ j⟩, the gradient w.r.t. es (s ∈{i, j, i′, j′}) is calculated as: ∇esL2 = ∇esℓ(t+, t−)+2λ2E (I −W2)T (I −W2) 1s, where I ∈Rn×n is the identity matrix. Other parameters are not included in R2, and their gradients remain the same as defined in previous work. To better capture the cohesion within each category, during each stochastic step we resample the nearest neighbors for each entity, uniformly from the category to which it belongs. 3.4 Advantages and Extensions The advantages of our approach can be summarized as follows: 1) By incorporating geometrically based regularization terms, the SSE models are able to capture the semantic correlation between entities, which exists intrinsically but is overlooked in previous work. 2) By leveraging additional entity category information, the SSE models can deal with the data sparsity issue that commonly exists in most KGs. Both aspects lead to more accurate embeddings. Entity category information has also been investigated in (Nickel et al., 2012; Chang et al., 2014; Wang et al., 2015), but in different manners. Nickel et al. (2012) take categories as pseudo entities and introduce a specific relation to link entities to categories. Chang et al. (2014) and Wang et al. (2015) use entity categories to specify relations’ argument expectations, removing invalid triples during training and reasoning respectively. None of them considers the intrinsic geometric structure of the embedding space. Actually, our approach is quite general. 1) The smoothness assumptions can be imposed to a wide variety of KG embedding models, not only the ones introduced in Section 2, but also those based on matrix/tensor factorization (Nickel et al., 2011; Chang et al., 2013). 2) Besides semantic categories, other information (e.g. entity similarities specified by users or derived from auxiliary data sources) can also be used to construct the manifold regularization terms. 3) Besides KG embedding, similar smoothness assumptions can also be Location Sport CityCapitalOfCountry AthleteLedSportTeam CityLocatedInCountry AthletePlaysForTeam CityLocatedInGeopoliticallocation AthletePlaysInLeague CityLocatedInState AthletePlaysSport CountryLocatedInGeopoliticallocation CoachesInLeague StateHasCapital CoachesTeam StateLocatedInCountry TeamPlaysInLeague StateLocatedInGeopoliticallocation TeamPlaysSport Table 2: Relations in Locationand Sport. applied in other embedding tasks (e.g. word embedding and sentence embedding). 4 Experiments We empirically evaluate the proposed SSE models in two tasks: link prediction (Bordes et al., 2013) and triple classification (Socher et al., 2013). 4.1 Data Sets We create three data sets with different sizes using NELL (Carlson et al., 2010): Location, Sport, and Nell186. Locationand Sportare two small-scale data sets, both containing 8 relations on the topics of “location” and “sport” respectively. The corresponding relations are listed in Table 2. Nell186 is a larger data set containing the most frequent 186 relations. On all the data sets, entities appearing only once are removed. 
We extract the entity category information from a specific relation called Generalization, and keep non-overlapping categories.2 Categories containing less than 5 entities on Locationand Sportas well as categories containing less than 50 entities on Nell186 are further removed. Table 3 gives some statistics of the three data sets, where # Rel./# Ent./# Trip./# Cat. denotes the number of relations/entities/observed triples/categories respectively, and # c-Ent. denotes the number of entities that have category labels. Note that our SSE models do not require every entity to have a category label. From the statistics, we can see that all the three data sets suffer from the data sparsity issue, containing a relatively small number of observed triples compared to the number of entities. On the two small-scale data sets Locationand Sport, triples are split into training/validation/test sets, with the ratio of 3:1:1. The first set is used for modeling training, the second for hyperparameter tuning, and the third for evaluation. All experiments are repeated 5 times by drawing new 2If two categories overlap, the smaller one is discarded. 88 # Rel. # Ent. # Trip. # Cat. # c-Ent. Location 8 380 718 5 358 Sport 8 1,520 3,826 4 1,506 Nell186 186 14,463 41,134 35 8,590 Table 3: Statistics of data sets. training/validation/test splits, and results averaged over the 5 rounds are reported. On Nell186 experiments are conducted only once, using a training/validation/test split with 31,134/5,000/5,000 triples respectively. We will release the data upon request. 4.2 Link Prediction This task is to complete a triple ⟨ei, rk, e j⟩with ei or e j missing, i.e., predict ei given (rk, e j) or predict e j given (ei, rk). Baseline methods. We take TransE, SME (lin), SME (bilin), and SE as our baselines. We then incorporate manifold regularization terms into these methods to obtain the SSE models. A model with the LE/LLE regularization term is denoted as TransE-LE/TransE-LLE for example. We further compare our SSE models with the setting proposed by Nickel et al. (2012), which also takes into account the entity category information, but in a more direct manner. That is, given an entity e with its category label ce, we create a new triple ⟨e, Generalization, ce⟩and add it into the training set. Such a method is denoted as TransE-Cat for example. Evaluation protocol. For evaluation, we adopt the same ranking procedure proposed by Bordes et al. (2013). For each test triple ⟨ei, rk, e j⟩, the head entity ei is replaced by every entity e′ i in the KG, and the energy is calculated for the corrupted triple ⟨e′ i, rk, e j⟩. Ranking the energies in ascending order, we get the rank of the correct entity ei. Similarly, we can get another rank by corrupting the tail entity ej. Aggregated over all test triples, we report three metrics: 1) the averaged rank, denoted as Mean (the smaller, the better); 2) the median of the ranks, denoted as Median (the smaller, the better); and 3) the proportion of ranks no larger than 10, denoted as Hits@10 (the higher, the better). Implementation details. We implement the methods based on the code provided by Bordes et al. (2013)3. For all the methods, we create 100 mini-batches on each data set. On Locationand Sport, the dimension of the embedding space d is 3https://github.com/glorotxa/SME set in the range of {10, 20, 50, 100}, the margin γ is set in the range of {1, 2, 5, 10}, and the learning rate is fixed to 0.1. 
On Nell186, the hyperparameters d and γ are fixed to 50 and 1 respectively, and the learning rate is fixed to 10. In LE and LLE, the regularization hyperparameters λ1 and λ2 are tuned in {10−4, 10−5, 10−6, 10−7, 10−8}. And the number of nearest neighbors K in LLE is tuned in {5, 10, 15, 20}. The best model is selected by early stopping on the validation sets (by monitoring Mean), with a total of at most 1000 iterations over the training sets. Results. Table 4 reports the results on the test sets of Location, Sport, and Nell186. From the results, we can see that: 1) SSE (regularized via either LE or LLE) outperforms all the baselines on all the data sets and with all the metrics. The improvements are usually quite significant. The metric Mean drops by about 10% to 65%, Median drops by about 5% to 75%, and Hits@10 rises by about 5% to 190%. This observation demonstrates the superiority and generality of our approach. 2) Even if encoded in a direct way (e.g. TransE-Cat), the entity category information can still help the baseline methods in the link prediction task. This observation indicates that leveraging additional information is indeed useful in dealing with the data sparsity issue and hence leads to better performance. 3) Compared to the strategy which incorporates the entity category information directly, formulating such information as manifold regularization terms results in better and more stable results. The *-Cat models sometimes perform even worse than the baselines (e.g. TransE-Cat on Sportdata), while the SSE models consistently achieve better results. This observation further demonstrates the superiority of constraining the geometric structure of the embedding space. We further visualize and compare the geometric structures of the embedding spaces learned by traditional embedding and semantically smooth embedding. We select the 10 largest semantic categories in Nell186 (specified in Figure 1) and the 5,740 entities therein. We take the embeddings of these entities learned by TransE, TransE-Cat, TransE-LE, and TransE-LLE, with the optimal hyperparameter settings determined in the link prediction task. Then we create 2D plots using tSNE (Van der Maaten and Hinton, 2008)4. 
The results are shown in Figure 1, where a different 4http://lvdmaaten.github.io/tsne/ 89 Location Sport Nell186 Mean Median Hits@10 (%) Mean Median Hits@10 (%) Mean Median Hits@10 (%) TransE 30.94 10.70 50.56 362.66 62.90 43.86 924.37 94.00 16.95 TransE-Cat 28.48 8.90 52.43 320.30 86.40 37.46 657.53 80.50 19.14 TransE-LE 28.59 8.90 53.06 183.10 23.20 45.83 573.55 79.00 20.26 TransE-LLE 28.03 9.20 52.36 231.67 52.40 43.18 535.32 95.00 20.02 SME (lin) 63.01 24.10 40.90 266.50 87.10 32.34 427.86 26.00 35.97 SME (lin)-Cat 41.12 18.30 42.43 263.88 70.80 35.03 309.60 25.00 36.22 SME (lin)-LE 36.19 16.10 43.75 237.38 50.80 38.35 276.94 25.00 37.14 SME (lin)-LLE 38.22 15.60 43.96 241.70 63.70 36.54 252.87 25.00 37.14 SME (bilin) 47.66 20.90 37.85 314.49 124.00 33.83 848.39 28.00 35.71 SME (bilin)-Cat 40.75 16.20 42.71 298.09 103.80 35.86 560.76 24.00 37.83 SME (bilin)-LE 33.41 14.00 44.24 297.90 116.10 38.95 448.31 24.00 37.80 SME (bilin)-LLE 32.84 13.60 46.25 286.63 110.10 35.67 452.43 28.00 36.51 SE 108.15 69.90 14.72 426.70 242.60 24.72 904.84 44.00 27.81 SE-Cat 88.36 48.20 20.76 435.44 231.00 35.39 529.38 40.00 28.68 SE-LE 36.43 16.00 42.92 252.30 90.50 37.19 456.20 43.00 30.89 SE-LLE 38.47 17.50 42.08 235.44 105.40 37.83 447.05 37.00 31.55 Table 4: Link prediction results on the test sets of Location, Sport, and Nell186. Athlete Politicianus Chemical City Clothing Country Sportsteam Journalist Televisionstation Room (a) TransE. (b) TransE-Cat. (c) TransE-LE. (d) TransE-LLE. Figure 1: Embeddings of entities belonging to the 10 largest categories in Nell186 (best viewed in color). color is used for each category. It is easy to see that imposing the semantic smoothness assumptions helps in capturing the semantic correlation between entities in the embedding space. Entities within the same category lie closer to each other, while entities belonging to different categories are easily distinguished (see Figure 1(c) and Figure 1(d)). Incorporating the entity category information directly could also helps. But it fails on some “hard” entities (i.e., those belonging to different categories but mixed together in the center of Figure 1(b)). We have conducted the same experiments with the other methods and observed similar phenomena. 4.3 Triple Classification This task is to verify whether a given triple ⟨ei, rk, e j⟩is correct or not. We test our SSE models in this task, with the same comparison settings as used in the link prediction task. Evaluation protocol. We follow the same evaluation protocol used in (Socher et al., 2013; Wang et al., 2014b). To create labeled data for classification, for each triple in the test and validation sets, we construct a negative triple for it by randomly corrupting the entities. To corrupt a position (head or tail), only entities that have appeared in that position are allowed. During triple classification, a triple is predicted as positive if the energy is below a relation-specific threshold δr; otherwise as negative. We report two metrics on the test sets: micro-averaged accuracy and macro-averaged accuracy, denoted as Micro-ACC and Macro-ACC respectively. The former is a per-triple average, while the latter is a per-relation average. Implementation details. We use the same hyperparameter settings as in the link prediction task. The relation-specific threshold δr is determined by maximizing Micro-ACC on the validation sets. 
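To make this decision rule explicit, the sketch below picks each δr on the validation triples and then computes the two accuracies on the test triples. The data layout (tuples of relation, energy, and a 0/1 label) is our own assumption; since the thresholds are independent across relations, maximizing per-relation accuracy on the validation set is equivalent to maximizing Micro-ACC.

```python
import numpy as np

def pick_thresholds(valid):
    """valid: iterable of (relation, energy, label) with label 1 for positive triples.
    For each relation, pick the threshold delta_r maximizing validation accuracy,
    where a triple is predicted positive iff its energy is below delta_r."""
    by_rel = {}
    for r, s, y in valid:
        by_rel.setdefault(r, []).append((s, y))
    thresholds = {}
    for r, pairs in by_rel.items():
        candidates = sorted({s for s, _ in pairs})
        candidates.append(candidates[-1] + 1e-6)       # also allow "accept everything seen"
        accs = [(np.mean([(s < t) == (y == 1) for s, y in pairs]), t) for t in candidates]
        thresholds[r] = max(accs)[1]
    return thresholds

def micro_macro_accuracy(test, thresholds):
    """Micro-ACC averages over all triples; Macro-ACC averages the per-relation accuracies."""
    per_rel = {}
    for r, s, y in test:
        per_rel.setdefault(r, []).append(float((s < thresholds[r]) == (y == 1)))
    micro = float(np.mean([c for cs in per_rel.values() for c in cs]))
    macro = float(np.mean([np.mean(cs) for cs in per_rel.values()]))
    return micro, macro
```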
Again, training is limited to at most 1000 iterations, and the best model is selected by early stopping on the validation sets (by monitoring Micro-ACC). Results. Table 5 reports the results on the test sets of Location, Sport, and Nell186. The results indicate that: 1) SSE (regularized via either LE or LLE) performs consistently better than the base90 Location Sport Nell186 Micro-ACC Macro-ACC Micro-ACC Macro-ACC Micro-ACC Macro-ACC TransE 86.11 81.66 72.52 73.78 84.21 77.86 TransE-Cat 82.50 77.81 75.09 74.23 87.34 81.27 TransE-LE 86.39 81.50 79.88 77.34 90.32 84.61 TransE-LLE 87.01 83.03 80.29 77.71 90.08 84.50 SME (lin) 75.90 71.82 72.61 71.24 88.54 84.17 SME (lin)-Cat 83.33 80.90 73.52 72.28 91.00 86.20 SME (lin)-LE 84.65 79.33 79.25 74.95 92.44 88.07 SME (lin)-LLE 84.58 79.60 79.45 75.61 92.99 88.68 SME (bilin) 73.06 67.26 71.33 67.78 88.78 84.79 SME (bilin)-Cat 79.38 74.35 75.12 72.41 91.67 86.48 SME (bilin)-LE 83.75 79.66 79.23 76.18 93.37 89.29 SME (bilin)-LLE 83.54 80.36 79.33 75.35 93.64 89.39 SE 65.14 60.01 68.61 63.71 90.18 83.93 SE-Cat 68.61 62.82 67.62 62.17 92.87 87.72 SE-LE 81.67 77.52 81.46 74.72 93.94 88.62 SE-LLE 82.01 77.45 80.25 76.07 93.95 88.54 Table 5: Triple classification results (%) on the test sets of Location, Sport, and Nell186. line methods on all the data sets in both metrics. The improvements are usually quite substantial. The metric Micro-ACC rises by about 1% to 25%, and Macro-ACC by about 2% to 30%. 2) Incorporating the entity category information directly can also improve the baselines in the triple classification task, again demonstrating the effectiveness of leveraging additional information to deal with the data sparsity issue. 3) It is a better choice to incorporate the entity category information as manifold regularization terms as opposed to encoding it directly. The *-Cat models sometimes perform even worse than the baselines (e.g. TransECat on Locationdata and SE-Cat on Sportdata), while the SSE models consistently achieve better results. The observations are similar to those observed during the link prediction task, and further demonstrate the superiority and generality of our approach. 5 Related Work This section reviews two lines of related work: KG embedding and manifold learning. KG embedding aims to embed a KG composed of entities and relations into a low-dimensional vector space, and model the plausibility of each fact in that space. Yang et al. (2014) categorized the literature into three major groups: 1) methods based on neural networks, 2) methods based on matrix/tensor factorization, and 3) methods based on Bayesian clustering. The first group performs the embedding task using neural network architectures (Bordes et al., 2013; Bordes et al., 2014; Socher et al., 2013). Several state-of-the-art neural network-based embedding models have been introduced in Section 2. For other work please refer to (Jenatton et al., 2012; Wang et al., 2014b; Lin et al., 2015). In the second group, KGs are represented as tensors, and embedding is performed via tensor factorization or collective matrix factorization techniques (Singh and Gordon, 2008; Nickel et al., 2011; Chang et al., 2014). The third group embeds factorized representations of entities and relations into a nonparametric Bayesian clustering framework, so as to obtain more interpretable embeddings (Kemp et al., 2006; Sutskever et al., 2009). 
Our work falls into the first group, but differs in that it further imposes constraints on the geometric structure of the embedding space, which exists intrinsically but is overlooked in previous work. Although this paper focuses on incorporating geometrically based regularization terms into neural network architectures, it can be easily extended to matrix/tensor factorization techniques. Manifold learning is a geometrically motivated framework for machine learning, enforcing the learning model to be smooth w.r.t. the geometric structure of data (Belkin et al., 2006). Within this framework, various manifold learning algorithms have been proposed, such as ISOMAP (Tenenbaum et al., 2000), Laplacian Eigenmaps (Belkin and Niyogi, 2001), and Locally Linear Embedding (Roweis and Saul, 2000). All these algorithms are based on the so-called local invariance assumption, i.e., nearby points are likely to have similar embeddings or labels. Manifold learning has been widely applied in many different areas, from dimensionality reduction (Belkin and Niyo91 gi, 2001; Cai et al., 2008) and semi-supervised learning (Zhou et al., 2004; Zhu and Niyogi, 2005) to recommender systems (Ma et al., 2011) and community question answering (Wang et al., 2014a). This paper employs manifold learning algorithms to model the semantic smoothness assumptions in KG embedding. 6 Conclusion and Future Work In this paper, we have proposed a novel approach to KG embedding, referred to as Semantically Smooth Embedding (SSE). The key idea of SSE is to impose constraints on the geometric structure of the embedding space and enforce it to be semantically smooth. The semantic smoothness assumptions are constructed by using entities’ category information, and then formulated as geometrically based regularization terms to constrain the embedding task. The embeddings learned in this way are capable of capturing the semantic correlation between entities. By leveraging additional information besides observed triples, SSE can also deal with the data sparsity issue that commonly exists in most KGs. We empirically evaluate SSE in two benchmark tasks of link prediction and triple classification. Experimental results show that by incorporating the semantic smoothness assumptions, SSE significantly and consistently outperforms state-of-the-art embedding methods, demonstrating the superiority of our approach. In addition, our approach is quite general. The smoothness assumptions can actually be imposed to a wide variety of embedding models, and it can also be constructed using other information besides entities’ semantic categories. As future work, we would like to: 1) Construct the manifold regularization terms using other data sources. The only information required to construct the manifold regularization terms is the similarity between entities (used to define the adjacency matrix in LE and to select nearest neighbors for each entity in LLE). We would try entity similarities derived in different ways, e.g., specified by users or calculated from entities’ textual descriptions. 2) Enhance the efficiency and scalability of SSE. Processing the manifold regularization terms can be time- and space-consuming (especially the one induced by the LE algorithm). We would investigate how to address this problem, e.g., via the efficient iterative algorithms introduced in (Saul and Roweis, 2003) or via parallel/distributed computing. 3) Impose the semantic smoothness assumptions on other KG embedding methods (e.g. 
those based on matrix/tensor factorization or Bayesian clustering), and even on other embedding tasks (e.g. word embedding or sentence embedding). Acknowledgments We would like to thank the anonymous reviewers for their valuable comments and suggestions. This work is supported by the National Natural Science Foundation of China (grant No. 61402465), the Strategic Priority Research Program of the Chinese Academy of Sciences (grant No. XDA06030200), and the National Key Technology R&D Program (grant No. 2012BAH46B03). References Eneko Agirre, Oier Lopez de Lacalle, and Aitor Soroa. 2014. Random walks for knowledge-based word sense disambiguation. Computational Linguistics, 40(1):57–84. Mikhail Belkin and Partha Niyogi. 2001. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems, pages 585–591. Mikhail Belkin, Partha Niyogi, and Vikas Sindhwani. 2006. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. Journal of Machine Learning Research, 7:2399–2434. Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A collaboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, pages 1247–1250. Antoine Bordes, Jason Weston, Ronan Collobert, and Yoshua Bengio. 2011. Learning structured embeddings of knowledge bases. In Proceedings of the 25th AAAI Conference on Artificial Intelligence, pages 301–306. Antoine Bordes, Nicolas Usunier, Alberto GarciaDur´an, Jason Weston, and Oksana Yakhnenko. 2013. Translating embeddings for modeling multirelational data. In Advances in Neural Information Processing Systems, pages 2787–2795. Antoine Bordes, Xavier Glorot, Jason Weston, and Yoshua Bengio. 2014. A semantic matching energy function for learning with multi-relational data. Machine Learning, 94(2):233–259. 92 Deng Cai, Xiaofei He, Xiaoyun Wu, and Jiawei Han. 2008. Non-negative matrix factorization on manifold. In Proceedings of the 8th IEEE International Conference on Data Mining, pages 63–72. Andrew Carlson, Justin Betteridge, Bryan Kisiel, Burr Settles, Estevam R. Hruschka Jr, and Tom M. Mitchell. 2010. Toward an architecture for neverending language learning. In Proceedings of the 24th AAAI Conference on Artificial Intelligence, pages 1306–1313. Kai-Wei Chang, Wen-tau Yih, and Christopher Meek. 2013. Multi-relational latent semantic analysis. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1602–1612. Kai-Wei Chang, Wen-tau Yih, Bishan Yang, and Christopher Meek. 2014. Typed tensor decomposition of knowledge bases for relation extraction. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1568–1579. Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S. Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 541–550. Rodolphe Jenatton, Nicolas L. Roux, Antoine Bordes, and Guillaume R. Obozinski. 2012. A latent factor model for highly multi-relational data. In Advances in Neural Information Processing Systems, pages 3167–3175. Charles Kemp, Joshua B. Tenenbaum, Thomas L. Griffiths, Takeshi Yamada, and Naonori Ueda. 2006. Learning systems of concepts with an infinite relational model. 
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 929–938, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Improving Named Entity Recognition in Tweets via Detecting Non-Standard Words Chen Li and Yang Liu Computer Science Department, The University of Texas at Dallas Richardson, Texas 75080, USA {chenli,[email protected]} Abstract Most previous work of text normalization on informal text made a strong assumption that the system has already known which tokens are non-standard words (NSW) and thus need normalization. However, this is not realistic. In this paper, we propose a method for NSW detection. In addition to the information based on the dictionary, e.g., whether a word is out-ofvocabulary (OOV), we leverage novel information derived from the normalization results for OOV words to help make decisions. Second, this paper investigates two methods using NSW detection results for named entity recognition (NER) in social media data. One adopts a pipeline strategy, and the other uses a joint decoding fashion. We also create a new data set with newly added normalization annotation beyond the existing named entity labels. This is the first data set with such annotation and we release it for research purpose. Our experiment results demonstrate the effectiveness of our NSW detection method and the benefit of NSW detection for NER. Our proposed methods perform better than the state-of-the-art NER system. 1 Introduction Short text messages or comments from social media websites such as Facebook and Twitter have become one of the most popular communication forms in recent years. However, abbreviations, misspelled words and many other non-standard words are very common in short texts for various reasons (e.g., length limitation, need to convey much information, writing style). They post problems to many NLP techniques in this domain. There are many ways to improve language processing performance on the social media data. One is to leverage normalization techniques to automatically convert the non-standard words into the corresponding standard words (Aw et al., 2006; Cook and Stevenson, 2009; Pennell and Liu, 2011; Liu et al., 2012a; Li and Liu, 2014; Sonmez and Ozgur, 2014). Intuitively this will ease subsequent language processing modules. For example, if ‘2mr’ is converted to ‘tomorrow’, a text-to-speech system will know how to pronounce it, a part-ofspeech (POS) tagger can label it correctly, and an information extraction system can identify it as a time expression. This normalization task has received an increasing attention in social media language processing. However, most of previous work on normalization assumed that they already knew which tokens are NSW that need normalization. Then different methods are applied only to these tokens. To our knowledge, Han and Baldwin (2011) is the only previous work which made a pilot research on NSW detection. One straight forward method to do this is to use a dictionary to classify a token into in-vocabulary (IV) words and out-of-vocabulary (OOV) words, and just treat all the OOV words as NSW. The shortcoming of this method is obvious. For example, tokens like ‘iPhone’, ‘PES’(a game name) and ‘Xbox’ will be considered as NSW, however, these words do not need normalization. Han and Baldwin (2011) called these OOV words correct-OOV, and named those OOV words that do need normalization as ill-OOV. 
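The dictionary-lookup baseline discussed above can be sketched in a few lines. The tiny word list and example tokens below are illustrative assumptions, not the actual spell dictionary used later in the paper; the sketch only shows why treating every out-of-dictionary token as non-standard over-triggers on correct-OOV words.

```python
# A toy stand-in for a full spell dictionary; real systems use a much larger lexicon.
DICTIONARY = {"my", "sounds", "awesome"}

def dictionary_baseline(tokens):
    """Label every out-of-dictionary token as ill-OOV (the naive baseline)."""
    return [(tok, "IV" if tok.lower() in DICTIONARY else "ill-OOV") for tok in tokens]

print(dictionary_baseline(["my", "iPhone", "snds", "awesome"]))
# [('my', 'IV'), ('iPhone', 'ill-OOV'), ('snds', 'ill-OOV'), ('awesome', 'IV')]
# 'snds' is genuinely ill-OOV, but 'iPhone' is a correct-OOV word that needs
# no normalization -- exactly the failure mode discussed above.
```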
We will follow their naming convention and use these two terms in our study. In this paper, we propose two methods to classify tokens in informal text into three classes: IV, correct-OOV, and ill-OOV. In the following, we call this task the NSW detection task, and these three labels NSW labels or classes. The novelty of our work is that we incorporate a token’s normalization information to assist this clas929 sification process. Our experiment results demonstrate that our proposed system gives a significant performance improvement on NSW detection compared with the dictionary baseline system. On the other hand, the impact of normalization or NSW detection on NER has not been well studied in social media domain. In this paper, we propose two methods to incorporate the NSW detection information: one is a pipeline system that just uses the predicted NSW labels as additional features in an NER system; the other one uses joint decoding, where we can simultaneously decide a token’s NSW and NER labels. Our experiment results show that our proposed joint decoding performs better than the pipeline method, and it outperforms the state-of-the-art NER system. Our contributions in this paper are as follows: (1) We proposed a NSW detection model by leveraging normalization information of the OOV tokens. (2) We created a data set with new NSW and normalization information, in addition to the existing NER labels. (3) It is the first time to our knowledge that an effective and joint approach is proposed to combine the NSW detection and NER techniques to improve the performance of these two tasks at the same time on social media data. (4) We demonstrate the effectiveness of our proposed method. Our proposed NER system outperforms the state-of-the-art system. 2 Related Work There has been a surge of interest in lexical normalization with the advent of social media data. Lots of approaches have been developed for this task, from using edit distance (Damerau, 1964; Levenshtein, 1966), to the noisy channel model (Cook and Stevenson, 2009; Pennell and Liu, 2010; Liu et al., 2012a) and machine translation method (Aw et al., 2006; Pennell and Liu, 2011; Li and Liu, 2012b; Li and Liu, 2012a). Normalization performance on some benchmark data has been improved a lot. Currently, unsupervised models are widely used to extract latent relationship between non-standard words and correct words from a huge corpus. Hassan and Menezes (2013) applied the random walk algorithm on a contextual similarity bipartite graph, constructed from n-gram sequences on a large unlabeled text corpus to build relation between non-standard tokens and correct words. Yang and Eisenstein (2013) presented a unified unsupervised statistical model, in which the relationship between the standard and non-standard words is characterized by a log-linear model, permitting the use of arbitrary features. Chrupała (2014) proposed a text normalization model based on learning edit operations from labeled data while incorporating features induced from unlabeled data via recurrent network derived character-level neural text embeddings. These studies only focused on how to normalize a given ill-OOV word and did not address the problem of detecting an ill-OOV word. Han and Baldwin (2011) is the only previous study that conducted the detection work. 
For any OOV word, they replaced it with its possible correct candidate, then if the possible candidate together with OOV’s original context adheres to the knowledge they learned from large formal corpora, the replacement could be considered as a better choice and that OOV token is classified as ill-OOV. In this paper, we propose a different method for NSW detection. Similar to (Han and Baldwin, 2011), we also use normalization information for OOV words, but we use a feature based learning approach. In order to improve robustness of NLP modules in social media domain, some works chose to design specific linguistic information. For example, by designing or annotating POS, chunking and capitalized information on tweets, (Ritter et al., 2011) proposed a system which reduced the POS tagging error by 41% compared with Stanford POS Tagger, and by 50% in NER compared with the baseline systems. Gimpel et al. (2011) created a specific set of POS tags for twitter data. With this tag set and word cluster information extracted from a huge Twitter corpus, their proposed system obtained significant improvement on POS tagging accuracy in Twitter data. At the same time, increasing research work has been done to integrate lexical normalization into the NLP tasks in social media data. Kaji and Kitsuregawa (2014) combined lexical normalization, word segmentation and POS tagging on Japanese microblog. They used rich character-level and word-level features from the state-of-the-art models of joint word segmentation and POS tagging in Japanese (Kudo et al., 2004; Neubig et al., 2011). Their model can also be trained on a partially annotated corpus. Li and Liu (2015) conducted a similar research on joint POS tagging and text normalization for English. Wang and Kan 930 (2013) proposed a method of joint ill-OOV word recognition and word segmentation in Chinese Microblog. But with their method, ill-OOV words are merely recognized and not normalized. Therefore, they did not investigate how to exploit the information that may be derived from normalization to increase word segmentation accuracy. Liu et al. (2012b) studied the problem of named entity normalization (NEN) for tweets. They proposed a novel graphical model to simultaneously conduct NER and NEN on multiple tweets. Although this work involved text normalization, it only focused on the NER task, and there was no reported result for normalization. On Turkish tweets, Kucuk and Steinberger (2014) adapted NER rules and resources to better fit Twitter language by relaxing its capitalization constraint, expanding its lexical resources based on diacritics, and using a normalization scheme on tweets. These showed positive effect on the overall NER performance. Rangarajan Sridhar et al. (2014) decoupled the SMS translation task into normalization followed by translation. They exploited bi-text resources, and presented a normalization approach using distributed representation of words learned through neural networks. In this study, we propose new methods to effectively integrate information of OOV words and their normalization for the NER task. In particular, by adopting joint decoding for both NSW detection and NER, we are able to outperform stateof-the-art results for both tasks. This is the first study that systematically evaluates the effect of OOV words and normalization on NER in social media data. 3 Proposed Method 3.1 NSW Detection Methods The task of NSW detection is to find those words that indeed need normalization. 
Note that in this study we only consider single-token ill-OOV words (both before and after normalization). For example, we would consider snds (sounds) as illOOV, but not smh (shaking my head). For a data set, our annotation process is as follows. We first manually label whether a token is ill-OOV and if so its corresponding standard word. We only consider tokens consisting of alphanumeric characters. Then based on a dictionary, the tokes that are not labeled as ill-OOV can be categorized into IV and OOV words. These OOV words will be considered as correct-OOV. Therefore all the tokens will have these three labels: IV, ill-OOV, and correct-OOV. Throughout this paper, we use GNU spell dictionary (v0.60.6.1) to determine whether a token is OOV.1 Twitter mentions (e.g., @twitter), hashtags and urls are excluded from consideration for OOV. Dictionary lookup of Internet slang2 is performed to filter those ill-OOV words whose correct forms are not single words. We propose two methods for NSW detection. The first one is a two-step method, where we first label a token as IV or OOV based on the given dictionary and some filter rules, then a statistical classifier is applied on those OOV tokens to further decide their classes: ill-OOV or correct-OOV. We use a maximum entropy classifier for this. The second model directly does 3-way classification to predict a token’s label to be IV, correct-OOV, or ill-OOV. We use a CRF model in this method.3 Table 1 shows the features used in these two methods. The first dictionary feature is not applicable for the two-step method because all the instances in that process have the same feature value ‘OOV’. However, this dictionary feature is an important feature for the 3-way classification model – a token with a feature value ‘IV’ has a very high probability of being ‘IV’. Lexical features focus on a token’s surface information to judge whether it is a regular English word or not. It is because most of correct-OOV words (e.g., location and person names) are still some regular words, complying with the general rules of word formation. For example, features 5-8 consider English word formation rules that at least one vowel character is needed for a correct word4. Feature 9 considers that a correct English word does not contain more than three consecutive same character. The character level language model used in Feature 10 is trained from a dictionary. A higher probability may indicate that it is a correct word. The motivation for the normalization features is 1We remove all the one-character tokens, except a and I. 25452 items are collected from http://www.noslang.com. 3We can also use a maximum entropy classifier to implement this model. Our experiments showed that using CRFs has slightly better results. But the main reason we adopt CRFs is because we use CRFs for NER, therefore we can easily integrate the two models in joint decoding in Section 3.2 for NER and NSW detection. We do not use CRFs in the two-step system because the labeling is performed on a subset of the words, not the entire sequence. 4Although some exceptions exist, this rule applies to most words. 931 Dictionary Feature 1. is token categorized as IV or OOV by the given dictionary (Only used in 3-way classification) Lexical Features 2. word identity 3. whether token’s first character is capitalized 4. token’s length 5. how many vowel character chunks does this token have 6. how many consonant character chunks does this token have 7. the length of longest consecutive vowel character chunk 8. 
the length of longest consecutive consonant character chunk 9. whether this token contains more than 3 consecutive same character 10. character level probability of this token based on a character level language model Normalization Features 11. whether each individual candidate list has any candidates for this token 12. how many candidates each individual candidate list has 13. whether each individual list’s top 10 candidates contain this token itself 14. the max number of lists that have the same top one candidate 15. the similarity value between each individual normalization system’s first candidate w and this token t, calculated by longest common string(w,t) length(t) 16. the similarity value between each individual normalization system’s first candidate w and this token t, calculated by longest common sequence(w,t) length(t) Table 1: Features used in NSW detection system. to leverage the normalization result of an OOV token to help its classification. Before we describe the reason why normalization information could benefit this task, we first introduce the normalization system we used. We apply a state-of-theart normalization system proposed by (Li and Liu, 2014). Briefly, in this normalization system there are three supervised and two unsupervised subsystems for each OOV token, resulting in six candidate lists (one system provides two lists). Then a maximum entropy reranking model is adopted to combine and rerank these candidate lists, using a rich set of features. Please refer to (Li and Liu, 2014) for more details. By analyzing each individual system, we find that for ill-OOV words most normalization systems can generate many candidates, which may contain a correct candidate; for correct-OOV words, many normalization systems have few candidates or may not provide any candidates. For example, only two of the six lists have candidates for the token Newsfeed and Metropcs. Therefore, we believe the patterns of these normalization results contain useful information to classify OOVs. Note that this kind of feature is only applicable for those tokens that are judged as OOV by the given dictionary (normalization is done on these OOV words). The bottom of Table 1 shows the normalization features we designed. 3.2 NER Methods The NER task we study in this paper is just about segmenting named entities, without identifying their types (e.g., person, location, organization). Following most previous work, we model it as a sequence-labeling task and use the BIO encoding method (each word either begins, is inside, or outside of a named entity). Intuitively, NSW detection has an impact on NER, because many named entities may have the correct-OOV label. Therefore, we investigate if we can leverage NSW label information for NER. First, we adopt a pipeline method, where we first perform NSW detection and the results are used as features in the NER system. Table 2 shows the features we designed. One thing worth mentioning is that the POS tags we used are from (Gimpel et al., 2011). This POS tag set consists of 25 coarsegrained tags designed for social media text. We use CRFs for this NER system. The above method simply incorporates a token’s predicted NSW label as features in the NER model. Obviously it has an unavoidable limitation – the errors from the NSW detection model would affect the downstream NER process. Therefore we propose a second method, a joint decoding process to determine a token’s NSW and NER label at the same time. 
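The joint alternative just introduced, and detailed with Figure 1(C) in the next section, amounts to Viterbi decoding over the nine combined NSW/NER labels with the two models' log-scores interpolated. The sketch below is only an illustration of that idea under simplifying assumptions: the four scorer arguments stand in for the two separately trained CRF models and are not the authors' actual integration; the interpolation weights are the values reported in Section 4.1.

```python
import itertools

NER_TAGS = ["B", "I", "O"]
NSW_TAGS = ["IV", "correct-OOV", "ill-OOV"]
JOINT = list(itertools.product(NER_TAGS, NSW_TAGS))  # 9 combined states, as in Figure 1(C)

ALPHA, BETA = 0.95, 0.5  # interpolation weights (values reported in Section 4.1)

def joint_viterbi(tokens, ner_emit, nsw_emit, ner_trans, nsw_trans):
    """Decode NER (BIO) and NSW labels for one tweet simultaneously.

    The scorer arguments are assumed to return log-probabilities from the two
    separately trained models: ner_emit(y, token), nsw_emit(o, token),
    ner_trans(y_prev, y), nsw_trans(o_prev, o).
    """
    def emit(state, tok):
        y, o = state
        return ner_emit(y, tok) + ALPHA * nsw_emit(o, tok)

    def trans(prev, cur):
        return ner_trans(prev[0], cur[0]) + BETA * nsw_trans(prev[1], cur[1])

    scores = {s: emit(s, tokens[0]) for s in JOINT}
    backpointers = []
    for tok in tokens[1:]:
        new_scores, pointers = {}, {}
        for cur in JOINT:
            best_prev = max(JOINT, key=lambda p: scores[p] + trans(p, cur))
            new_scores[cur] = scores[best_prev] + trans(best_prev, cur) + emit(cur, tok)
            pointers[cur] = best_prev
        scores = new_scores
        backpointers.append(pointers)
    # Backtrace the highest-scoring joint label sequence.
    state = max(JOINT, key=scores.get)
    path = [state]
    for pointers in reversed(backpointers):
        state = pointers[state]
        path.append(state)
    return list(reversed(path))  # one (BIO tag, NSW label) pair per token
```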
The 3-way classification method for NSW detection and the above NER system both use CRFs. The decoding process for these two tasks is performed separately, using their corresponding trained models. The motivation of our proposed joint decoding process is to combine the 932 two processes together, therefore we can avoid the error propagation in the pipeline system, and allow the two models to benefit from each other. Part (A) and (B) of Figure 1 show the trellis for decoding word sequence ‘Messi is well-known’ in the NER and NSW detection systems respectively. As shown in (A), every black box with dashed line is a hidden state (possible BIO tag) for the corresponding token. Two sources of information are used in decoding. One is the label transition probability p(yi|yj), from the trained model, where yi and yj are two BIO tags. The other is p(yi|ti), where yi is a BIO label for token ti. Similarly, during decoding in NSW detection, we need the Basic Features 1. Lexical features (word n-gram): Unigram: Wi(i = 0) Bigram: WiWi+1(i = −2, −1, 0, 1) Trigram: Wi−1WiWi+1(i = −2, −1, 0, 1) 2. POS features (POS n-gram): Unigram: Pi(i = 0) Bigram: PiPi+1(i = −2, −1, 0, 1) Trigram: Pi−1PiPi+1(i = −2, −1, 0, 1) 3. Token’s capitalization information: Trigram: Ci−1CiCi+1(i = 0) (Ci = 1 means this token’s first character is capitalized.) Additional Features by Incorporating Predicted NSW Label 4. Token’s dictionary categorization label: Unigram: Di(i = 0) Bigram: DiDi+1(i = −2, −1, 0, 1) Trigram: Di−1DiDi+1(i = −2, −1, 0, 1) 5. Token’s predicted NSW label: Unigram: Li(i = 0) Bigram: LiLi+1(i = −2, −1, 0, 1) Trigram: Li−1LiLi+1(i = −2, −1, 0, 1) 6. Compound features using lexical and NSW labels: WiDi, WiLi, WiDiLi(i = 0) 7. Compound features using POS and NSW labels: PiDi, PiLi, PiDiLi(i = 0) 8. Compound features using word, POS, and NSW labels: WiPiDiLi(i = 0) Table 2: Features used in the NER System. W and P represent word and POS. D and L represent labels classified by the dictionary and 3-way NSW detection system. Subscripts i, i −1 and i + 1 indicate the word position. For example, when i equals to -1, i + 1 means the current word. probability of p(oi|oj) and p(oi|ti). The only difference is that oi is a NSW label. Part (C) of Figure 1 shows the trellis used in our proposed joint decoding approach for NSW detection and NER. In this figure, three places are worth pointing out: (1) the label is a combination of NSW and NER labels, and thus there are nine in total; (2) the label transition probability is a linear sum of the previous two transition probabilities: p(yi oi|yj oj) = p(yi|yj) + β ∗p(oi|oj), where yi and yj are BIO tags and oi and oj are NSW tags; (3) similarly, p(yi oi|ti) equals to p(yi|ti) + α ∗p(oi|ti). Please note all these probabilities are log probabilities and they are trained separately from each system. 4 Data and Experiment 4.1 Data Set and Experiment Setup The NSW detection model is trained using the data released by (Li and Liu, 2014). It has 2,577 Twitter messages (selected from the Edinburgh Twitter corpus (Petrovic et al., 2010)), in which there are 2,333 unique pairs of NSW and their standard words. This data is used for training the different normalization models. We labeled this data set using the given dictionary for NSW detection. 4,121 tokens are labeled as ill-OOV, 1,455 as correctOOV, and the rest 33,740 tokens are IV words. We have two test sets for evaluating the NSW detection system. One is from (Han and Baldwin, 2011), which includes 549 tweets. 
Each tweet contains at least one ill-OOV and the corresponding correct word. We call it Test set 1 in the following. The other is from (Li and Liu, 2015), who further processed the tweets data from (Owoputi et al., 2013). Briefly, Owoputi et al. (2013) released 2,347 tweets with their designed POS tags for social media text, and then Li and Liu (2015) further annotated this data with normalization information for each token. The released data by (Li and Liu, 2015) contains 798 tweets with ill-OOV. We use these 798 tweets as the second data set for NSW detection, and call it Test set 2 in the following. In addition, we use all of these 2,347 tweets to train a POS model which then is used to predict tokens’ POS tags for NER (see Section 3.2 about the POS tags). The CRF model is implemented using the pocket-CRF toolkit5. The SRILM toolkit (Stolcke, 2002) is used to build the character-level language model (LM) for generating the LM features in NSW detection system. 5http://sourceforge.net/projects/pocket-crf-1/ 933 is Messi p(B|B) well-known p(B|Messi) B I O B I O B I O p(I|Messi) p(O|Messi) p(I|B) p(O|B) is Messi p(IV|IV) well-known p(correct-OOV|Messi) IV correct-OOV Ill-OOV IV correct-OOV Ill-OOV IV correct-OOV Ill-OOV p(Ill-OOV|Messi) p(IV|Messi) p(correct-OOV|IV) p(ill-OOV|IV) Messi p(B|Messi)+ α* p(IV|Messi) B_IV B_correct-OOV B_ill-OOV I_IV . . . p(B|is)+ α* p(correct-OOV|is) B_IV B_correct-OOV B_ill-OOV I_IV is p(B|B)+ β * p(IVIIV) p(I|B)+ β * p(IVIIV) . . . p(B|well-known)+ α* p(ill-OOV|well-known) B_IV B_correct-OOV B_ill-OOV I_IV well-known . . . p(B|B)+ β * p(ill-OOVIIV) p(I|B)+ β * p(IVIill-OOV) (C) (B) (A) Figure 1: Trellis Viterbi decoding for different systems. The data with the NER labels are from (Ritter et al., 2011) who annotated 2,396 tweets (34K tokens) with named entities, but there is no information on the tweets’ ill-OOV words. In order to evaluate the impact of ill-OOV on NER, we ask six annotators to annotate the ill-OOV words and the corresponding standard words in this data. There are only 1,012 sentences with ill-OOV words. We use all the sentences (2,396) for the NER experiments. This data set,6 to our knowledge, is the first one having both ill-OOV and NER annotation in social media domain. For joint decoding, the parameters α and β are empirically set as 0.95 and 0.5. 4.2 Experiment Results 4.2.1 NSW Detection Results For NSW detection, we compared our two proposed systems on the two test sets described above, and also conducted different experiments to investigate the effectiveness of different features. We use the categorization of words by the dictionary as the baseline for this task. Table 3 shows the results for three NSW detection systems. We use Recall, Precision and F value for the ill-OOV class as the evaluation metrics. The Dictionary baseline can only recognize the token as IV and OOV, and thus label all the OOV words as ill-OOV. Both the two-step and the 3-way classification methods in Table 3 leverage all the features described 6http://www.hlt.utdallas.edu/∼chenli/normalization ner in Table 1. First note because of the property of the two-step method (it further divides the OOV words from the dictionary-based method into illOOV and correct-OOV), the upper bound of its recall is the recall of the dictionary based method. We can see that in Test set 1, both the two-step and the 3-way classification methods have a significant improvement compared to the Dictionary method. 
However, in Test set 2, the two-step method performs much worse than that of the 3-way classification method, although it outperforms the dictionary method. This can be attributed to the characteristics of that data set and also the system’s upper bounded recall. We will provide a more detailed analysis in the following feature analysis part. Table 4 and 5 show the performance of the two systems on the two test sets with different features. Note that the dictionary feature is not applicable to the two-step method, and the results for the twostep method using dictionary feature (feature 1, first line in the tables) are the same as the dictionary baseline in Table 3. From these two tables, we can see that: (1) For both systems, normalization features (11∼16) and lexical features (2∼10) both perform better than the dictionary feature. (2) In general, the combination of any two kinds of features has better performance than any one feature type. Using all the features (results shown in Table 3) yields the best performance, which significantly improves the performance compared with the baseline. (3) There are some differences across 934 the two data sets in terms of the feature effectiveness on the two methods. On Test set 2, when lexical features are combined with other features (forth and fifth line of Table 5), the 3-way classification method significantly outperforms the twostep method. It is because this data set has a large number of ill-OOV words that are dictionary words. For example, token ‘its’ appears 31 times as ill-OOV, ‘ya’ 13 times, and ‘bro’ 10 times. Such ill-OOV words occur more than two hundred times in total. Since these tokens are included in the dictionary, they are already classified as IV by the dictionary, and their label will not change in the second step. This is also the reason why in Table 3, the performance of 3-way classification is significantly better than that of the two-step method using all the features. However, we also find that when we only use lexical features (2∼10), the two methods have similar performance on Test set 2, but the two-step method has much better performance than the 3-way classifier method on Test set 1. We believe this shows that lexical features themselves are not reliable for the NSW detection task, and other information such as normalization features may be more stable. System Test Set 1 Test Set 2 R P F R P F Dictionary 88.73 72.35 79.71 67.87 69.59 68.72 Two-step 81.66 88.74 85.05 57.60 90.04 70.26 3-way 87.63 83.49 85.51 73.53 90.42 81.10 Table 3: NSW detection results. Features Two-Step 3-way Classification R P F R P F 1 88.73 72.35 79.71 87.13 70.04 77.66 2∼10 87.21 77.44 82.04 82.59 67.49 74.28 11∼16 86.45 78.77 82.43 91.75 74.97 82.51 1∼10 76.78 92.87 84.07 77.12 93.09 84.36 2∼16 81.16 89.02 84.90 87.13 86.54 85.30 1,11∼16 78.30 91.00 84.17 78.55 93.77 85.48 Table 4: Feature impact on NSW detection on Test Set 1. The feature number corresponds to that in Table 1. 4.2.2 NER Results For the NER task, in order to make a fair comparison with (Ritter et al., 2011), we conducted 4-fold cross validation experiments as they did. 
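Throughout the result tables, performance is reported as recall, precision, and F value computed for one class of interest (the ill-OOV class for NSW detection). The snippet below is a minimal, token-level illustration of that metric; it is a generic sketch rather than the authors' evaluation code, and NER scores are in fact computed over entity segments rather than single tokens.

```python
def prf_for_class(gold, predicted, target="ill-OOV"):
    """Precision/recall/F1 of one label over aligned gold and predicted tags."""
    tp = sum(1 for g, p in zip(gold, predicted) if g == target and p == target)
    fp = sum(1 for g, p in zip(gold, predicted) if g != target and p == target)
    fn = sum(1 for g, p in zip(gold, predicted) if g == target and p != target)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

gold = ["IV", "ill-OOV", "correct-OOV", "ill-OOV"]
pred = ["IV", "ill-OOV", "ill-OOV", "IV"]
print(prf_for_class(gold, pred))  # (0.5, 0.5, 0.5)
```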
Features | Two-Step (R / P / F) | 3-way Classification (R / P / F)
1 | 67.86 / 69.59 / 68.72 | 66.45 / 64.27 / 65.34
2∼10 | 64.33 / 79.52 / 71.12 | 69.56 / 76.26 / 72.76
11∼16 | 53.78 / 91.34 / 67.70 | 54.35 / 91.42 / 68.17
1∼10 | 63.12 / 81.53 / 71.16 | 78.41 / 81.65 / 80.00
2∼16 | 56.40 / 89.02 / 69.06 | 72.32 / 90.28 / 80.31
1,11∼16 | 56.40 / 92.35 / 70.03 | 56.68 / 92.81 / 70.38
Table 5: Feature impact on NSW detection on Test Set 2.
First we present the result on the NSW detection task on this data set when using our proposed joint decoding method integrating NER and NSW. This is done using the 1,012 sentences that contain ill-OOV words. Table 6 shows such results on the NER data described in Section 4.1. The 3-way classification method for NSW detection is used as a baseline here. It is the same model as used in the previous section, and applied to the entire NER data. For each cross-validation experiment of the joint decoding method, the NSW detection model is kept the same (from the 3-way classification method), but the NER model is tested on 1/4 of the data and trained on the remaining 3/4 of the data. From Table 6, we can see that joint decoding yields some marginal improvement for the NSW detection task.
System | R | P | F
3-way classification | 58.65 | 72.83 | 64.97
Joint decoding w all features | 59.53 | 72.96 | 65.56
Table 6: NSW detection results on the data from (Ritter et al., 2011) with our new NSW annotation.
In the following, we will focus on the impact of NSW detection on NER. Table 7 shows the NER performance from different systems on the data with NER and NSW labels. From this table, we can see that when using our pipeline system, adding NSW label features gives a significant improvement compared to the basic features. The F value of 67.4% when using all the features is even higher than the state-of-the-art performance from (Ritter et al., 2011). Please note that Ritter et al. (2011) used much more information than us for this task, such as dictionaries including a set of type lists gathered from Freebase, brown clusters, and outputs of their specifically designed chunk and capitalization label components (the chunk and capitalization components were specially created by them for social media domain data, and they created a data set to train these models). Then they
Features R P F Basic 55.85 74.33 63.76 Basic + 4 57.71 75.04 65.23 Basic + 5 57.47 75.87 65.37 Basic + 6 56.53 74.20 64.12 Basic + 7 56.13 74.66 64.06 Basic + 8 57.14 74.55 64.66 Table 8: Pipeline NER performance using different features. The feature number corresponds to that in Table 2. 4.2.3 Error Analysis A detailed error analysis further shows what improvement our proposed method makes and what errors it is still making. For example, for the tweet ‘Watching the VMA pre-show again ...’, the token VMA is annotated as B-tvshow in NER labels. Without using predicted NSW labels, the baseline system labels this token as O (outside of named entity). However, after using the NSW predicted label correct-OOV and related features, the pipeline NER system predicts its label as B. We noticed that joint decoding can solve some complicated cases that are hard for the pipeline system, especially for some OOVs, or when there are consecutive named entity tokens. For example, in a tweet, ‘Let’s hope the Serie A continues to be on the tv schedule next week’, Seria A is a proper noun (meaning Italian soccer league). The annotation for Seria and A is correct-OOV/B and IV/I. We find the joint decoding system successfully labels A as I after Seria is labeled as B. However, the pipeline system labels A as O even it correctly labels Seria. Take another example, in a tweet ‘I was gonna buy a Zune HD ...’, Zune HD is consecutive named entities. The pipeline system recognized Zune as correct-OOV and HD as ill-OOV, then labeled both them as O. But the joint decoding system identified HD as correct-OOV and labeled ‘Zune HD’ as B and I. These changes may have happened because of adjusting the transition probability and observation probability during Viterbi decoding. 5 Conclusion and Future Work In this paper, we proposed an approach to detect NSW. This makes the lexical normalization task as a complete applicable process. The proposed NSW detection system leveraged normalization information of an OOV and other useful lexical information. Our experimental results show both kinds of information can help improve the prediction performance on two different data sets. Furthermore, we applied the predicted labels as additional information for the NER task. In this task, we proposed a novel joint decoding approach to label every token’s NSW and NER label in a tweet at the same time. Again, experimental results demonstrate that the NSW label has a significant impact on NER performance and our proposed method improves performance on both tasks and outperforms the best previous results in NER. In future work, we propose to pursue a number of directions. First, we plan to consider how to conduct NSW detection and normalization at the same time. Second, we like to try a joint method to 936 simultaneously train the NSW detection and NER models, rather than just combining models in decoding. Third, we want to investigate the impact of NSW and normalization on other NLP tasks such as parsing in social media data. Acknowledgments We thank the anonymous reviewers for their detailed and insightful comments on earlier drafts of this paper. The work is partially supported by DARPA Contract No. FA8750-13-2-0041. Any opinions, findings, and conclusions or recommendations expressed are those of the authors and do not necessarily reflect the views of the funding agencies. References Aiti Aw, Min Zhang, Juan Xiao, Jian Su, and Jian Su. 2006. A phrase-based statistical model for sms text normalization. In Processing of COLING/ACL. 
Grzegorz Chrupała. 2014. Normalizing tweets with edit scripts and recurrent neural embeddings. In Proceedings of ACL. Paul Cook and Suzanne Stevenson. 2009. An unsupervised model for text message normalization. In Proceedings of NAACL. Fred J Damerau. 1964. A technique for computer detection and correction of spelling errors. Communications of the ACM, 7(3):171–176. Kevin Gimpel, Nathan Schneider, Brendan O’Connor, Dipanjan Das, Daniel Mills, Jacob Eisenstein, Michael Heilman, Dani Yogatama, Jeffrey Flanigan, and Noah A. Smith. 2011. Part-of-speech tagging for twitter: Annotation, features, and experiments. In Proceedings of ACL. Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In Proceeding of ACL. Hany Hassan and Arul Menezes. 2013. Social text normalization using contextual graph random walks. In Proceedings of ACL. Nobuhiro Kaji and Masaru Kitsuregawa. 2014. Accurate word segmentation and pos tagging for Japanese microblogs: Corpus annotation and joint modeling with lexical normalization. In Proceedings of EMNLP. Dilek Kucuk and Ralf Steinberger. 2014. Experiments to improve named entity recognition on turkish tweets. In Proceedings of Workshop on Language Analysis for Social Media (LASM) on EACL. Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying conditional random fields to Japanese morphological analysis. In Proceedings of EMNLP. Vladimir I Levenshtein. 1966. Binary codes capable of correcting deletions, insertions and reversals. In Soviet physics doklady, volume 10, page 707. Chen Li and Yang Liu. 2012a. Improving text normalization using character-blocks based models and system combination. In Proceedings of COLING 2012. Chen Li and Yang Liu. 2012b. Normalization of text messages using character- and phone-based machine translation approaches. In Proceedings of 13th Interspeech. Chen Li and Yang Liu. 2014. Improving text normalization via unsupervised model and discriminative reranking. In Proceedings of ACL. Chen Li and Yang Liu. 2015. Joint POS tagging and text normalization for informal text. In Proceedings of IJCAI. Fei Liu, Fuliang Weng, and Xiao Jiang. 2012a. A broad-coverage normalization system for social media language. In Proceedings of ACL. Xiaohua Liu, Ming Zhou, Xiangyang Zhou, Zhongyang Fu, and Furu Wei. 2012b. Joint inference of named entity recognition and normalization for tweets. In Proceedings of ACL. Graham Neubig, Yosuke Nakata, and Shinsuke Mori. 2011. Pointwise prediction for robust, adaptable Japanese morphological analysis. In Proceedings of ACL. Olutobi Owoputi, Brendan O’Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A. Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In Proceedings of NAACL. Deana Pennell and Yang Liu. 2010. Normalization of text messages for text-to-speech. In ICASSP. Deana Pennell and Yang Liu. 2011. A character-level machine translation approach for normalization of sms abbreviations. In Proceedings of IJCNLP. Sasa Petrovic, Miles Osborne, and Victor Lavrenko. 2010. The Edinburgh twitter corpus. In Proceedings of NAACL. Vivek Kumar Rangarajan Sridhar, John Chen, Srinivas Bangalore, and Ron Shacham. 2014. A framework for translating SMS messages. In Proceedings of COLING. Alan Ritter, Sam Clark, and Oren Etzioni. 2011. Named entity recognition in tweets: an experimental study. In Proceedings of EMNLP. 937 Cagil Sonmez and Arzucan Ozgur. 2014. A graphbased approach for contextual text normalization. 
In Proceedings of EMNLP. Andreas Stolcke. 2002. SRILM-an extensible language modeling toolkit. In Proceedings International Conference on Spoken Language Processing. Aobo Wang and Min-Yen Kan. 2013. Mining informal language from Chinese microtext: Joint word recognition and segmentation. In Proceedings of ACL. Yi Yang and Jacob Eisenstein. 2013. A log-linear model for unsupervised text normalization. In Proceedings of EMNLP.
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 939–949, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics A Unified Kernel Approach for Learning Typed Sentence Rewritings Martin Gleize LIMSI-CNRS, Orsay, France Universit´e Paris-Sud, Orsay, France [email protected] Brigitte Grau LIMSI-CNRS, Orsay, France ENSIIE, Evry, France [email protected] Abstract Many high level natural language processing problems can be framed as determining if two given sentences are a rewriting of each other. In this paper, we propose a class of kernel functions, referred to as type-enriched string rewriting kernels, which, used in kernel-based machine learning algorithms, allow to learn sentence rewritings. Unlike previous work, this method can be fed external lexical semantic relations to capture a wider class of rewriting rules. It also does not assume preliminary syntactic parsing but is still able to provide a unified framework to capture syntactic structure and alignments between the two sentences. We experiment on three different natural sentence rewriting tasks and obtain state-of-the-art results for all of them. 1 Introduction Detecting implications of sense between statements stands as one of the most sought-after goals in computational linguistics. Several high level tasks look for either one-way rewriting between single sentences, like recognizing textual entailment (RTE) (Dagan et al., 2006), or two-way rewritings like paraphrase identification (Dolan et al., 2004) and semantic textual similarity (Agirre et al., 2012). In a similar fashion, selecting sentences containing the answer to a question can be seen as finding the best rewritings of the question among answer candidates. These problems are naturally framed as classification tasks, and as such most current solutions make use of supervised machine learning. They have to tackle several challenges: picking an adequate language representation, aligning semantically equivalent elements and extracting relevant features to learn the final decision. Bag-of-words and by extension bag-of-ngrams are traditionally the most direct approach and features rely mostly on lexical matching (Wan et al., 2006; Lintean and Rus, 2011; Jimenez et al., 2013). Moreover, a good solving method has to account for typically scarce labeled training data, by enriching its model with lexical semantic resources like WordNet (Miller, 1995) to bridge gaps between surface forms (Mihalcea et al., 2006; Islam and Inkpen, 2009; Yih et al., 2013). Models based on syntactic trees remain the typical choice to account for the structure of the sentences (Heilman and Smith, 2010; Wang and Manning, 2010; Socher et al., 2011; Calvo et al., 2014). Usually the best systems manage to combine effectively different methods, like Madnani et al.’s meta-classifier with machine translation metrics (Madnani et al., 2012). A few methods (Zanzotto et al., 2007; Zanzotto et al., 2010; Bu et al., 2012) use kernel functions to learn what makes two sentence pairs similar. Building on this work, we present a typeenriched string rewriting kernel giving the opportunity to specify in a fine-grained way how words match each other. Unlike previous work, rewriting rules learned using our framework account for syntactic structure, term alignments and lexicosemantic typed variations in a unified approach. 
We detail how to efficiently compute our kernel and lastly experiment on three different high-level NLP tasks, demonstrating the vast applicability of our method. Our system based on type-enriched string rewriting kernels obtains state-of-the-art results on paraphrase identification and answer sentence selection and outperforms comparable methods on RTE. 2 Type-Enriched String Rewriting Kernel Kernel functions measure the similarity between two elements. Used in machine learning methods 939 like SVM, they allow complex decision functions to be learned in classification tasks (Vapnik, 2000). The goal of a well-designed kernel function is to have a high value when computed on two instances of same label, and a low value for two instances of different label. 2.1 String rewriting kernel String rewriting kernels (Bu et al., 2012) count the number of common rewritings between two pairs of sentences seen as sequences of words. The rewriting rule (A) in Figure 1 can be viewed as a kind of phrasal paraphrase with linked variables (Madnani and Dorr, 2010). Rule (A) rewrites (B)’s first sentence into its second but it does not however rewrite the sentences in (C), which is what we try to fix in this paper. Following the terminology of string kernels, we use the term string and character instead of sentence and word. We denote (s, t) ∈(Σ∗× Σ∗) an instance of string rewriting, with a source string s and a target string t, both finite sequences of elements in Σ the finite set of characters. Suppose that we are given training data of such instances labeled in {+1, −1}, for paraphrase/nonparaphrase or entailment/non-entailment in applications. We can use a kernel method to train on this data and learn to automatically classify unlabeled instances. A kernel on string rewriting instances is a map: K : (Σ∗× Σ∗) × (Σ∗× Σ∗) →R such that for all (s1, t1), (s2, t2) ∈Σ∗× Σ∗, K((s1, t1), (s2, t2)) = ⟨Φ(s1, t1), Φ(s2, t2)⟩(1) where Φ maps each instance into a high dimension feature space. Kernels allow us to avoid the potentially expensive explicit representation of Φ through the inner product space they define. The purpose of the string rewriting kernels is to measure the similarity between two pairs of strings in term of the number of rewriting rules of a set R that they share. Φ is thus naturally defined by Φ(s, t) = (φr(s, t))r∈R with φr(s, t) = n the number of contiguous substring pairs of (s, t) that rewriting rule r matches. 2.2 Typed rewriting rules Let the wildcard domain D ⊆Σ∗be the set of strings which can be replaced by wildcards. We now present the formal framework of the typeenriched string rewriting kernels. Let Γp be the set of pattern types and Γv the set of variable types. To a type γp ∈Γp, we associate the typing relation γp≈ ⊆Σ × Σ. To a type γv ∈Γv,we associate the typing relation γv ; ⊆D × D. Together with the typing relations, we call the association of Γp and Γv the typing scheme of the kernel. Let Σp be defined as Σp = [ γ∈Γ {[a|b] | ∃a, b ∈Σ, a γ≈b} (2) We finally define typed rewriting rules. A typed rewriting rule is a triple r = (βs, βt, τ), where βs, βt ∈(Σp ∪{∗})∗denote source and target string typed patterns and τ ⊆ind∗(βs)×ind∗(βt) denotes the alignments between the wildcards in the two string patterns. Here ind∗(β) denotes the set of indices of wildcards in β. We say that a rewriting rule (βs, βt, τ) matches a pair of strings (s, t) if and only if the following conditions are true: • string patterns βs, resp. βt, can be turned into s, resp. 
t, by: – substituting each element [a|b] of Σp in the string pattern with an a or b (∈Σ) – substituting each wildcard in the string pattern with an element of the wildcard domain D • ∀(i, j) ∈τ, s, resp. t, substitutes the wildcards at index i, resp. j, by s∗∈D, resp. t∗, such that there exists a variable type γ ∈Γv with s∗ γ; t∗. A type-enriched string rewriting kernel (TESRK) is simply a string rewriting kernel as defined in Equation 1 but with R a set of typed rewriting rules. This class of kernels depends on wildcard domain D and the typed rewriting rules R which can be tuned to allow for more flexibility in the matching of pairs of characters in a rewriting rule. Within this framework, the k-gram bijective string rewriting kernel (kb-SRK) is defined by the wildcard domain D = Σ and the ruleset R = {(βs, βt, τ) | βs, βt ∈(Σp∪{∗})k, τ bijective} under Γp = Γv = {id} with a id≈b, resp. a id ; b, if and only if a = b. 940 heard was I heard Mary shouting. Mary was shouting. I caught him snoring. He was sleeping. (A) (B) (C) Figure 1: Rewriting rule (A) matches pair of strings (B) but does not match (C). We now present an example of how kb-SRK is applied to real pairs of sentences, what its limitations are and how we can deal with them by reworking its typing scheme. Let us consider again Figure 1, (A) is a rewriting rule with βs = (heard, ∗, ∗), βt = (∗, was, ∗), τ = {(2, 1); (3, 3)}. Each string pattern has the same length, and pairs of wildcards in the two patterns are aligned bijectively. This is a valid rule for kb-SRK. It matches the pair of strings (B): each aligned pair of wildcards is substituted in source and target sentences by the same word and string patterns of (A) can indeed be turned into pairs of substrings of the sentences. However, it cannot match the pair of sentences (C) in the original kb-SRK. We change Γp to {hypernym, id} where a hypernym ≈ b if and only if a and b have a common hypernym in WordNet. And we change Γv to Γv = {same pronoun, entailment, id} where a same pronoun ; b if and only if a and b are a pronoun of the same person and same number, and a entailment ; b if and only if verb a has a relation of entailment with b in WordNet. By redefining the typing scheme, rule (A) can now match (C). 3 Computing TESRK 3.1 Formulation The k-gram bijective string rewriting kernel can be computed efficiently (Bu et al., 2012). We show that we can compute its type-enriched equivalent at the price of a seemingly insurmountable loosening of theoretical complexity boundaries. Experiments however show that its computing time is of the same order as the original kernel. A type-enriched kb-SRK is parameterized by k the length of k-grams, and its typing scheme the sets Γp and Γv and their associated relations. The annotations of Γp and Γv to Kk and ¯Kk will be omitted for clarity and because they typically will not change while we test different values for k. We rewrite the inner product in Equation 1 to better fit the k-gram framework: Kk((s1, t1), (s2, t2)) = X αs1 ∈k-grams(s1) αt1 ∈k-grams(t1) X αs2 ∈k-grams(s2) αt2 ∈k-grams(t2) ¯Kk((αs1, αt1), (αs2, αt2)) (3) where ¯Kk is the number of different rewriting rules which match two pairs of k-grams (the same rule cannot trigger twice in k-gram substrings): ¯Kk((αs1, αt1), (αs2, αt2)) = X r∈R 1r(αs1, αt1)1r(αs2, αt2) (4) with 1r the indicator function of rule r: 1 if r matches the pair of k-grams, 0 otherwise. Computing Kk as defined in Equation 3 is obviously intractable. 
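Before walking through Algorithm 1, the quantities in Equations (3)–(5) can be made concrete with a direct (and, as noted above, intractable in general) transcription: enumerate every k-gram quadruple, zip the source and target k-grams with the ⊗ operator, and hand each zipped pair to the perfect-matching count. The count_perfect_matchings argument is a placeholder for Algorithm 1; this sketch only illustrates the data flow, not the pruned computation of Section 3.3.

```python
from itertools import product

def kgrams(s, k):
    """All contiguous k-grams (as tuples) of the character sequence s."""
    return [tuple(s[i:i + k]) for i in range(len(s) - k + 1)]

def zip_kgrams(a1, a2):
    """The zip operator: pair the characters of two k-grams position by position."""
    return list(zip(a1, a2))

def kernel_k_naive(s1, t1, s2, t2, k, count_perfect_matchings):
    """Direct transcription of Equation (3): sum K-bar_k over all k-gram quadruples.

    count_perfect_matchings is assumed to implement Algorithm 1 on the two
    zipped k-gram pairs, as in Equation (5).
    """
    total = 0
    for a_s1, a_s2 in product(kgrams(s1, k), kgrams(s2, k)):
        for a_t1, a_t2 in product(kgrams(t1, k), kgrams(t2, k)):
            total += count_perfect_matchings(zip_kgrams(a_s1, a_s2),
                                             zip_kgrams(a_t1, a_t2))
    return total
```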
There is O((n −k + 1)4) terms in the sum, where n is the length of the longest string, and each term involves enumerating every rewriting rule in R. 3.2 Computing ¯Kk in type-enriched kb-SRK Enumerating all rewriting rules in Equation 4 is itself intractable: there are more than |Σ|2k rules without wildcards, where |Σ| is conceivably the size of a typical lexicon. In fact, we just have to constructively generate the rules which substitute their string patterns correctly to simultaneously produce both pairs of k-grams (αs1, αt1) and (αs2, αt2). Let the operator ⊗be such that α1 ⊗α2 = ((α1[1], α2[1]), ..., (α1[k], α2[k])). This operation is generally known as zipping in functional programming. We use the function CountPerfectMatchings computed by Algorithm 1 to recursively count the number of rewriting rules matching both (αs1, αt1) and (αs2, αt2). The workings of the algorithm will make clearer why we can compute ¯Kk with the following formula: ¯Kk((αs1, αt1), (αs2, αt2)) = CountPerfectMatchings(αs1 ⊗αs2, αt1 ⊗αt2) (5) 941 Algorithm 1 takes as input remaining character pairs in αs1 ⊗αs2 and αt1 ⊗αt2, and outputs the number of ways they can substitute aligned wildcards in a matching rule. First (lines 2 and 3) we have the base case where both remaining sets are empty. There is exactly 1 way the empty set’s wildcards can be aligned with each other: nothing is aligned. In lines 4 to 9, there is no source pairs anymore, so the algorithm continues to deplete target pairs as long as they have a common pattern type, i.e. as long as they do not have to substitute a wildcard. If a candidate wildcard is found, as the opposing set is empty, we cannot align it and we return 0. In the general case (lines 11 to 19), consider the first character pair (a1, a2) in the reminder of αs1 ⊗αs2 in line 12. What follows in the computation depends on its types. Every character pair in αt1 ⊗αt2 that can be paired through variable types with (a1, a2) (lines 15 to 19) is a new potential wildcard alignment, so we try all the possible alignment and recursively continue the computation after removing both aligned pairs. And if (a1, a2) does not need to substitute a wildcard because it has common pattern types (lines 13 and 14), we can choose to not create any wildcard pairing with it and ignore it in the recursive call. This algorithm enumerates all configurations such that each character pair has a common pattern type or is matched 1-for-1 with a character pair with common variable types, which is exactly the definition of a rewriting rule in TESRK. This problem is actually equivalent to counting the perfect matchings of the bipartite graph of potential wildcards. It has been shown intractable (Valiant, 1979) and Algorithm 1 is a naive recursive algorithm to solve it. In our implementation we represent the graph with its biadjacency matrix, and if our typing relations are independent of k, the function has a O(k) time complexity without including its recursive calls. The number of recursive calls can be greater than k!2 which is the number of perfect matchings in a complete bipartite graph of 2k vertices. In our experiments on linguistic data however, we observed a linear number of recursive calls for low values of k, and up to a quadratic number for k > 10 –which is way past the point where the kernel becomes ineffective. As an example, Figure 2 shows the zipped kgrams for source and target as a bipartite graph Algorithm 1: Counting perfect matchings 1 CountPerfectMatchings(remS, remT) Data: remS: remaining char. 
pairs in source remT: remaining char. pairs in target graph: αs1 ⊗αs2 and αt1 ⊗αt2 as a bipartite graph, not added in the arguments to avoid cluttering the recursive calls ruleSet: Γp and Γv Result: Number of rewriting rules matching (αs1, αt1) and (αs2, αt2) 2 if remS == ∅and remT == ∅then 3 return 1; 4 else if remS == ∅then 5 (b1, b2) = remT.first(); 6 if ∃γ ∈Γp | b1 γ≈b2 then 7 return CountPerfectMatchings(∅, remT - {(b1, b2)}); 8 else 9 return 0; 10 else 11 result = 0; 12 (a1, a2) = remS.first(); 13 if ∃γ ∈Γp | a1 γ≈a2 then 14 res += CountPerfectMatchings(remS {(a1, a2)}, remT); 15 for (b1, b2) ∈remT 16 | ∃γ ∈Γv | a1 γ; b1 and a2 γ; b2 do 17 res += CountPerfectMatchings( 18 remS - {(a1, a2)}, 19 remT - {(b1, b2)} 20 ); (s[1], s[1]) (s[k], s[k]) (t[1], t[1]) (t[k], t[k]) (a, a) (b, b') (e1, e2) (f1, f2) (d1, d2) (c1, c2) ... ... ... ... ... ... ... ... Figure 2: Bipartite graph of character pairs, with edges between potential wildcards with 2k vertices and potential wildcard edges. Assuming that vertices (a, a) and (b, b′) have common pattern types, they can be ignored as in lines 7 and 14. (c1, c2) to (f1, f2) however must substitute wildcards in a matching rewriting rule. If we align (c1, c2) with (e1, e2) in line 16, the recursive call will return 0 because the other two pairs cannot be aligned. A valid rule is generated if c’s are paired with f’s and d’s with e’s. This kind of choices is the main source of computational cost. 942 This problem did not arise in the original kb-SRK because of the transitivity of its only type (identity). In type-enriched kb-SRK, wildcard pairing is less constrained. 3.3 Computing Kk Even with an efficient method for computing ¯Kk, implementing Kk directly by applying Equation 3 remains impractical. The main idea is to efficiently compute a reasonably sized set C of elements ((αs1, αt1), (αs2, αt2)) which has the essential property of including all elements such that ¯Kk((αs1, αt1), (αs2, αt2)) ̸= 0. By definition of C, we can compute efficiently Kk((s1, t1), (s2, t2)) = X ((αs1,αs2),(αt1,αt2))∈C ¯Kk((αs1, αt1), (αs2, αt2)) (6) There are a number of ways to do it, with a trade-off between computation time and number of elements in the reduced domain C. The main idea of our own algorithm is that ¯Kk((αs1, αt1), (αs2, αt2)) = 0 if the character pairs (a1, a2) ∈αs1 ⊗αs2 with no common pattern type are not all matched with pairs (b1, b2) ∈ αt1 ⊗αt2 such that a1 γ; b1 and a2 γ; b2 for some γ ∈Γv. This is conversely true for character pairs in αt1 ⊗αt2 with no common pattern type. More simply, character pairs with no common pattern type are mismatched and have to substitute a wildcard in a rewriting rule matching both (αs1, αt1) and (αs2, αt2). But introducing a wildcard on one side of the rule means that there is a matching wildcard on the other side, so we can eliminate k-gram quadruples that do not fill this wildcard inclusion. This filtering can be done efficiently and yields a manageable number of quadruples on which to compute ¯Kk. Algorithm 2 computes a set C to be used in Equation 6 for computing the final value of kernel Kk. In our experiments, it efficiently produces a reasonable number of inputs. All maps in the algorithm are maps to multisets, and multisets are used extensively throughout. Multisets are an extension of sets where elements can appear multiple times, the number of times being called the multiplicity. 
Typically implemented as hash tables from set elements to integers, they allow for constant-time retrieval of the number of a given element. Union (∪) and intersection (∩) have special definitions on multisets. If 1A(x) is the multiplicity of x in A, we have 1A∪B(x) = max(1A(x), 1B(x)) and 1A∩B(x) = min(1A(x), 1B(x)). Algorithm 2: Computing a set including all elements on which ¯Kk ̸= 0 Data: s1, t1, s2, t2 strings, and k an integer Result: Set C which include all inputs such that ¯Kk ̸= 0 1 Initialize maps ei s→t and maps ei t→s, for i ∈{1, 2}; 2 for i ∈{1, 2} do 3 for a ∈si, b ∈ti | a γ; b, γ ∈Γv do 4 ei s→t[a] += (b, γ); ei t→s[b] += (a, γ); 5 ws→t, aPt = OneWayInclusion(s1, s2, t1, t2, e1 s→t, e2 s→t); 6 wt→s, aPs = OneWayInclusion(t1, t2, s1, s2, e1 t→s, e2 t→s); 7 Initialize multiset res; 8 for (αs1, αs2) ∈aPs do 9 for (αt1, αt2) ∈aPt do 10 res += ((αs1, αs2), (αt1, αt2)); 11 res = res ∪ws→t ∪wt→s.map(swap); 12 return res; 13 14 OneWayInclusion(s1, s2, t1, t2, e1, e2) Initialize map d multisets resWildcards, resAllPatterns; 15 for (αs1, αs2) ∈kgrams(s1) × kgrams(s2) do 16 for (b1, b2) | ∃γ ∈Γv, (a1, a2) ∈ αs1 ⊗αs2, (bi, γ) ∈ei[ai] ∀i ∈{1, 2} do 17 d[(b1, b2)] += (αs1, αs2); 18 for (αt1, αt2) ∈kgrams(t1) × kgrams(t2) do 19 for (b1, b2) ∈αt1 ⊗αt2 | b1 γ ̸= b2∀γ ∈Γp do 20 if compatWkgrms not initialized then 21 Initialize multiset compatWkgrms = d[(b1, b2)]; 22 compatWkgrms = compatWkgrms ∩d[(b1, b2)]; 23 if compatWkgrms not initialized then 24 resAllPatterns += (αt1, αt2); 25 for (αs1, αs2) ∈compatWkgrms do 26 resWildcards+=((αs1, αs2), (αt1, αt2)); 27 return (resWildcards, resAllPatterns); Let us now comment on how the algorithm unfolds. In lines 1 to 4, we index characters in source strings by characters in target strings which have 943 common variable types, and vice versa. It allows in lines 15 to 19 to quickly map a character pair to the set of opposing k-gram pairs with a matching –in the sense of variable types– character pair, i.e. potential aligned wildcards. In lines 20 to 28 we keep only the k-gram quadruples whose wildcard candidates (character pairs with no common pattern) from one side all find matches on the other side. We do not check for the other inclusion, hence the name of the function OneWayInclusion. At line 26, we did not find any character pair with no common pattern, so we save the k-gram pair as ”all-pattern”. All-pattern k-grams will be paired in lines 8 to 10 in the result. Finally, in line 11, we add the union of one-way compatible k-gram quadruples; calling swap on all the pairs of one set is necessary to consistently have sources on the left side and targets on the right side in the result. 4 Experiments 4.1 Systems We experimented on three tasks: paraphrase identification, recognizing textual entailment and answer sentence selection. The setup we used for all experiments was the same save for the few parameters we explored such as: k, and typing scheme. We implemented 2 kernels, kb-SRK, henceforth simply denoted SRK, and the type-enriched kbSRK, denoted TESRK. All sentences were tokenized and POS-tagged using OpenNLP (Morton et al., 2005). Then they were stemmed using the Porter stemmer (Porter, 2001) in the case of SRK. Various other pre-processing steps were applied in the case of TESRK: they are considered as types in the model and are detailed in Table 1. We used LIBSVM (Chang and Lin, 2011) to train a binary SVM classifier on the training data with our two kernels. 
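This training step can be illustrated with a precomputed Gram matrix. The authors call LIBSVM directly; the sketch below instead uses scikit-learn's precomputed-kernel SVC as an analogous setup (an assumption made purely for illustration), with `kernel` standing for a normalized SRK or TESRK value between two sentence pairs.

```python
import numpy as np
from sklearn.svm import SVC

def gram_matrix(pairs_a, pairs_b, kernel):
    """Gram matrix K[i, j] = kernel(pairs_a[i], pairs_b[j]) over sentence pairs."""
    return np.array([[kernel(x, y) for y in pairs_b] for x in pairs_a])

def train_and_predict(train_pairs, train_labels, test_pairs, kernel, C=1.0):
    """Train a binary SVM on a precomputed kernel and classify unseen sentence pairs."""
    K_train = gram_matrix(train_pairs, train_pairs, kernel)
    clf = SVC(C=C, kernel="precomputed")
    clf.fit(K_train, train_labels)
    # Each test row holds kernel values against all training pairs.
    K_test = gram_matrix(test_pairs, train_pairs, kernel)
    return clf.predict(K_test)
```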
The default SVM algorithm in LIBSVM uses a parameter C, roughly akin to a regularization parameter. We 10-fold cross-validated this parameter on the training data, optimizing with a grid search for f-score, or MRR for question-answering. All kernels were normalized using ˜K(x, y) = K(x,y) √ K(x,x)√ K(y,y). We denote by ”+” a sum of kernels, with normalizations applied both before and after summing. Following Bu et al. (Bu et al., 2012) experimental setup, we introduced an auxiliary vector kernel denoted PR of features named unigram precision and recall, defined in (Wan et al., 2006). In our experiments a linear kernel seemed to yield the best results. Our Scala implementation of kb-SRKs has an average throughput of about 1500 original kbSRK computations per second, versus 500 typeenriched kb-SRK computations per second on a 8core machine. It typically takes a few hours on a 32-core machine to train, cross-validate and test on a full dataset. Finally, Table 1 presents an overview of our types with how they are defined and implemented. Every type can be used both as a pattern type or as a variable type, but the two roles are different. Pattern types are useful to unify different surface forms of rewriting rules that are semantically equivalent, i.e. having semantically similar patterns. Variable types are useful for when the semantic relation between 2 entities across the same rewriting is more important than the entities themselves. That is why some types in Table 1 are inherently more fitted to be used for one role rather than the other. For example, it is unlikely that replacing a word in a pattern of a rewriting rule by one of its holonyms will yield a semantically similar rewriting rule, so holonym would not be a good pattern type for most applications. On the contrary, it can be very useful in a rewriting rule to type a wildcard link with the relation holonym, as this provides constrained semantic roles to the linked wildcards in the rule, thus holonym would be a good variable type. 4.2 Paraphrase identification Paraphrase identification asks whether two sentences have the same meaning. The dataset we used to evaluate our systems is the MSR Paraphrase Corpus (Dolan and Brockett, 2005), containing 4,076 training pairs of sentences and 1,725 testing pairs. For example, the sentences ”An injured woman co-worker also was hospitalized and was listed in good condition.” and ”A woman was listed in good condition at Memorial’s HealthPark campus, he said.” are paraphrases in this corpus. On the other hand, ”’There are a number of locations in our community, which are essentially vulnerable,’ Mr Ruddock said.” and ”’There are a range of risks which are being seriously examined by competent authorities,’ Mr Ruddock said.” are not paraphrases. We report in Table 2 our best results, the system TESRK + PR, defined by the sum of PR and typed-enriched kb-SRKs with k from 1 to 4, with types Γp = Γv = {stem, synonym}. We observe 944 Type Typing relation on words (a, b) Tool/resources id words have same surface form and tag OpenNLP tagger idMinusTag words have same surface form OpenNLP tokenizer lemma words have same lemma WordNetStemmer stem words have same stem Porter stemmer synonym, antonym words are [type] WordNet hypernym, hyponym b is a [type] of a WordNet entailment, holonym ne a and b are both tagged with the same Named Entity BBN Identifinder lvhsn words are at edit distance of 1 Levenshtein distance Table 1: Types Paraphrase system Accuracy F-score All paraphrase 66.5 79.9 Wan et al. 
(2006) 75.6 83.0 Bu et al. (2012) 76.3 N/A Socher et al. (2011) 76.8 83.6 Madnani et al. (2012) 77.4 84.1 PR 73.5 82.1 SRK + PR 76.2 83.6 TESRK 76.6 83.7 TESRK + PR 77.2 84.0 Table 2: Evaluation results on MSR Paraphrase that our results are state-of-the-art and in particular, they improve on the orignal kb-SRK by a good margin. We tried other combinations of types but it did not yield good results, this is probably due to the nature of the MSR corpus, which did not contain much more advanced variations from WordNet. The only statistically significant improvement we obtained was between TESRK + PR and our PR baseline (p < 0.05). The performances obtained by all the cited systems and ours are not significantly different in any statistical sense. We made a special effort to try to reproduce as best as we could the original kb-SRK performances (Bu et al., 2012), although our implementation and theirs should theoretically be equivalent. Figure 3 plots the average number of recursive calls to CountPerfectMatchings (algorithm 1) during a kernel computation, as a function of k. Composing with logk, we can observe whether the empiric number of recursive calls is closer to O(k) or O(k2). We conclude that this element of complexity is linear for low values of k, but tends to explode past k = 7. Thankfully, counting common rewriting rules on pairs of 7-to-10-grams rarely yields non-zero results, so in practice using high 0 2 4 6 8 10 1 1.2 1.4 1.6 1.8 2 2.2 k logk(#recursive calls) Figure 3: Evolution of the number of recursive calls to CountPerfectMatchings with k 2 4 6 8 10 0 0.5 1 1.5 2 2.5 k |C| Σsentence lengths Figure 4: Evolution of the size of C with k values of k is not interesting. Figure 4 plots the average size of set C computed by algorithm 2, as a function of k (divided by the sum of lengths of the 4 sentences involved in the kernel computation). We can observe that this 945 RTE system Accuracy All entailments 51.2 Heilman and Smith (2010) 62.8 Bu et al. (2012) 65.1 Zanzotto et al. (2007) 65.8 Hickl et al. (2006) 80.0 PR 61.8 TESRK (All) 62.1 SRK + PR 63.8 TESRK (Syn) + PR 64.1 TESRK (All) + PR 66.1 Table 3: Evaluation results on RTE-3 quantity is small, except for a peak at low values of k, which is not an issue because the computation of ¯Kk is very fast for those values of k. 4.3 Recognizing textual entailment Recognizing Textual Entailment asks whether the meaning of a sentence hypothesis can be inferred by reading a sentence text. The dataset we used to evaluate our systems is RTE-3. Following similar work (Heilman and Smith, 2010; Bu et al., 2012), we took as training data (text, hypothesis) pairs from RTE-1 and RTE-2’s whole datasets and from RTE-3’s training data, which amounts to 3,767 sentence pairs. We tested on RTE-3 testing data containing 800 sentence pairs. For example, a valid textual entailment in this dataset is the pair of sentences ”In a move widely viewed as surprising, the Bank of England raised UK interest rates from 5% to 5.25%, the highest in five years.” and ”UK interest rates went up from 5% to 5.25%.”: the first entails the second. On the other hand, the pair ”Former French president General Charles de Gaulle died in November. More than 6,000 people attended a requiem mass for him at Notre Dame cathedral in Paris.” and ”Charles de Gaulle died in 1970.” does not constitute a textual entailment. 
We report in Table 3 our best results, the system TESRK (All) + PR, defined by the sum of PR, 1b-SRK and typed-enriched kb-SRKs with k from 2 to 4, with types Γp = {stem, synonym} and Γv = {stem, synonym, hypernym, hyponym, entailment, holonym}. Our results are to be compared with systems using techniques and resources of similar nature, but as reference the top performance at RTE-3 is still reported. This time we did not manage to fully reproduce Bu et al. 2012’s performance, but we observe that type-enriched kb-SRK greatly improves upon our original implementation of kb-SRK and outperforms their system anyway. Combining TESRK and the PR baseline yields significantly better results than either one alone (p < 0.05), and performs significantly better than the system of (Heilman and Smith, 2010), the only one which was evaluated on the same three tasks as us (p < 0.10). We tried with less types in our system TESRK (Syn) + PR by removing all WordNet types but synonyms but got lower performance. This seems to indicate that rich types indeed help capturing more complex sentence rewritings. Note that we needed for k = 1 to replace the type-enriched kb-SRK by the original kernel in the sum, otherwise the performance dropped significantly. Our conclusion is that including richer types is only beneficial if they are captured within a context of a couple of words and that including all those variations on unigrams only add noise. 4.4 Answer sentence selection Answer sentence selection is the problem of selecting among single candidate sentences the ones containing the correct answer to an open-domain factoid question. The dataset we used to evaluate our system on this task was created by (Wang et al., 2007) based on the QA track of past Text REtrieval Conferences (TREC-QA)1. The training set contains 4718 question/answer pairs, for 94 questions, originating from TREC 8 to 12. The testing set contains 1517 pairs for 89 questions. As an example, a correct answer to the question ”What do practitioners of Wicca worship?” is ”An estimated 50,000 Americans practice Wicca, a form of polytheistic nature worship.” On the other hand, the answer candidate ”When people think of Wicca, they think of either Satanism or silly mumbo jumbo.” is incorrect. Sentences with more than 40 words and questions with only positive or only negative answers were filtered out (Yao et al., 2013). The average fraction of correct answers per question is 7.4% for training and 18.7% for testing. Performances are evaluated as for a re-ranking problem, in term of Mean Average Precision (MAP) and Mean Reciprocal Rank (MRR). We report our results in Table 4. We evaluated several combinations of features. IDF word-count (IDF) is a baseline of 1Available at http://nlp.stanford.edu/ mengqiu/data/qg-emnlp07-data.tgz 946 System MAP MRR Random baseline 0.397 0.493 Wang et al. (2007) 0.603 0.685 Heilman and Smith (2010) 0.609 0.692 Wang and Manning (2010) 0.595 0.695 Yao et al. (2013) 0.631 0.748 Yih et al. (2013) LCLR 0.709 0.770 IDF word-count (IDF) 0.596 0.650 SRK 0.609 0.669 SRK + IDF 0.620 0.677 TESRK (WN) 0.642 0.725 TESRK (WN+NE) 0.656 0.744 TESRK (WN) + IDF 0.678 0.759 TESRK (WN+NE) + IDF 0.672 0.768 Table 4: Evaluation results on QA IDF-weighted common word counting, integrated in a linear kernel. Then we implemented SRK and TESRK (with k from 1 to 5) with two typing schemes: WN stands for Γp = {stem, synonym} and Γv = {stem, synonym, hypernym, hyponym, entailment, holonym}, and WN+NE adds type ne to both sets of types. 
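As an illustration, the IDF word-count baseline can be sketched as an IDF-weighted count of shared words fed to a linear kernel; the exact weighting scheme is not spelled out above, so the log-IDF form used here is an assumption.

```python
import math
from collections import Counter

def idf_table(corpus_sentences):
    """Inverse document frequency estimated over a background corpus of token lists."""
    n_docs = len(corpus_sentences)
    df = Counter()
    for tokens in corpus_sentences:
        df.update(set(tokens))
    return {w: math.log(n_docs / df[w]) for w in df}

def idf_wordcount_feature(question, answer, idf):
    """IDF-weighted count of words shared by the question and a candidate answer."""
    shared = Counter(question) & Counter(answer)  # multiset intersection
    return sum(count * idf.get(word, 0.0) for word, count in shared.items())
```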
We finally summed our kernels with the IDF baseline kernel. We observe that types which make use of WordNet variations seem to increase the most our performance. Our assumption was that named entities would be useful for question answering and that we could learn associations between question type and answer type through variations: NE does seem to help a little when combined with WN alone, but is less useful once TESRK is combined with our baseline of IDF-weighted common words. Overall, typing capabilities allow TESRK to obtain way better performances than SRK in both MAP and MRR, and our best system combining all our features is comparable to state-of-the-art systems in MRR, and significantly outperforms SRK + IDF, the system without types (p < 0.05). 5 Related work Lodhi et al. (Lodhi et al., 2002) were among the first in NLP to use kernels: they apply string kernels which count common subsequences to text classification. Sentence pair classification however require the capture of 2 types of links: the link between sentences within a pair, and the link between pairs. Zanzotto et al. (Zanzotto et al., 2007) used a kernel method on syntactic tree pairs. They expanded on graph kernels in (Zanzotto et al., 2010). Their method first aligns tree nodes of a pair of sentences to form a single tree with placeholders. They then use tree kernel (Moschitti, 2006) to compute the number of common subtrees of those trees. Bu et al. (Bu et al., 2012) introduced a string rewriting kernel which can capture at once lexical equivalents and common syntactic dependencies on pair of sentences. All these kernel methods require an exact match or assume prior partial matches between words, thus limiting the kind of learned rewriting rules. Our contribution addresses this issue with a typeenriched string rewriting kernel which can account for lexico-semantic variations of words. Limitations of our rewriting rules include the impossibility to skip a pattern word and to replace wildcards by multiple words. Some recent contributions (Chang et al., 2010; Wang and Manning, 2010) also provide a uniform way to learn both intermediary representations and a decision function using potentially rich feature sets. They use heuristics in the joint learning process to reduce the computational cost, while our kernel approach with a simple sequential representation of sentences has the benefit of efficiently computing an exact number of common rewriting rules between rewriting pairs. This in turn allows to precisely fine-tune the shape of desired rewriting rules through the design of the typing scheme. 6 Conclusion We developed a unified kernel-based framework for solving sentence rewriting tasks. Types allow for an increased flexibility in counting common rewriting rules, and can also add a semantic layer to the rewritings. We show that we can efficiently compute a kernel which takes types into account, called type-enriched k-gram bijective string rewriting kernel. A SVM classifier with this kernel yields state-of-the-art results in paraphrase identification and answer sentence selection and outperforms comparable systems in recognizing textual entailment. References Eneko Agirre, Mona Diab, Daniel Cer, and Aitor Gonzalez-Agirre. 2012. Semeval-2012 task 6: A pilot on semantic textual similarity. 
In Proceedings of the First Joint Conference on Lexical and Computational Semantics-Volume 1: Proceedings of the main conference and the shared task, and Volume 947 2: Proceedings of the Sixth International Workshop on Semantic Evaluation, pages 385–393. Association for Computational Linguistics. Fan Bu, Hang Li, and Xiaoyan Zhu. 2012. String re-writing kernel. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers-Volume 1, pages 449–458. Association for Computational Linguistics. Hiram Calvo, Andrea Segura-Olivares, and Alejandro Garc´ıa. 2014. Dependency vs. constituent based syntactic n-grams in text similarity measures for paraphrase recognition. Computaci´on y Sistemas, 18(3):517–554. Chih-Chung Chang and Chih-Jen Lin. 2011. Libsvm: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27. Ming-Wei Chang, Dan Goldwasser, Dan Roth, and Vivek Srikumar. 2010. Discriminative learning over constrained latent representations. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 429–437. Association for Computational Linguistics. Ido Dagan, Oren Glickman, and Bernardo Magnini. 2006. The pascal recognising textual entailment challenge. In Machine learning challenges. evaluating predictive uncertainty, visual object classification, and recognising tectual entailment, pages 177– 190. Springer. William B Dolan and Chris Brockett. 2005. Automatically constructing a corpus of sentential paraphrases. In Proc. of IWP. Bill Dolan, Chris Quirk, and Chris Brockett. 2004. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th international conference on Computational Linguistics, page 350. Association for Computational Linguistics. Michael Heilman and Noah A Smith. 2010. Tree edit models for recognizing textual entailments, paraphrases, and answers to questions. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 1011–1019. Association for Computational Linguistics. Aminul Islam and Diana Inkpen. 2009. Semantic similarity of short texts. Recent Advances in Natural Language Processing V, 309:227–236. Sergio Jimenez, Claudia Becerra, Alexander Gelbukh, Av Juan Dios B´atiz, and Av Mendiz´abal. 2013. Softcardinality: hierarchical text overlap for student response analysis. In Proceedings of the 2nd joint conference on lexical and computational semantics, volume 2, pages 280–284. Mihai C Lintean and Vasile Rus. 2011. Dissimilarity kernels for paraphrase identification. In FLAIRS Conference. Huma Lodhi, Craig Saunders, John Shawe-Taylor, Nello Cristianini, and Chris Watkins. 2002. Text classification using string kernels. The Journal of Machine Learning Research, 2:419–444. Nitin Madnani and Bonnie J Dorr. 2010. Generating phrasal and sentential paraphrases: A survey of data-driven methods. Computational Linguistics, 36(3):341–387. Nitin Madnani, Joel Tetreault, and Martin Chodorow. 2012. Re-examining machine translation metrics for paraphrase identification. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 182–190. Association for Computational Linguistics. Rada Mihalcea, Courtney Corley, and Carlo Strapparava. 2006. 
Corpus-based and knowledge-based measures of text semantic similarity. In AAAI, volume 6, pages 775–780. George A Miller. 1995. Wordnet: a lexical database for english. Communications of the ACM, 38(11):39–41. Thomas Morton, Joern Kottmann, Jason Baldridge, and Gann Bierner. 2005. Opennlp: A java-based nlp toolkit. http://opennlp.sourceforge.net. Alessandro Moschitti. 2006. Efficient convolution kernels for dependency and constituent syntactic trees. In Machine Learning: ECML 2006, pages 318–329. Springer. Martin F Porter. 2001. Snowball: A language for stemming algorithms. Richard Socher, Eric H Huang, Jeffrey Pennin, Christopher D Manning, and Andrew Y Ng. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems, pages 801–809. Leslie G Valiant. 1979. The complexity of enumeration and reliability problems. SIAM Journal on Computing, 8(3):410–421. Vladimir Vapnik. 2000. The nature of statistical learning theory. Springer Science & Business Media. Stephen Wan, Mark Dras, Robert Dale, and C´ecile Paris. 2006. Using dependency-based features to take the para-farce out of paraphrase. In Proceedings of the Australasian Language Technology Workshop, volume 2006. Mengqiu Wang and Christopher D Manning. 2010. Probabilistic tree-edit models with structured latent variables for textual entailment and question answering. In Proceedings of the 23rd International Conference on Computational Linguistics, pages 1164– 1172. Association for Computational Linguistics. 948 Mengqiu Wang, Noah A Smith, and Teruko Mitamura. 2007. What is the jeopardy model? a quasisynchronous grammar for qa. In EMNLP-CoNLL, volume 7, pages 22–32. Xuchen Yao, Benjamin Van Durme, Chris CallisonBurch, and Peter Clark. 2013. Answer extraction as sequence tagging with tree edit distance. In HLTNAACL, pages 858–867. Citeseer. Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of the 26rd International Conference on Computational Linguistics. Association for Computational Linguistics. Fabio Massimo Zanzotto, Marco Pennacchiotti, and Alessandro Moschitti. 2007. Shallow semantics in fast textual entailment rule learners. In Proceedings of the ACL-PASCAL workshop on textual entailment and paraphrasing, pages 72–77. Association for Computational Linguistics. Fabio Massimo Zanzotto, Lorenzo DellArciprete, and Alessandro Moschitti. 2010. Efficient graph kernels for textual entailment recognition. Fundamenta Informaticae. 949
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 950–960, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Perceptually grounded selectional preferences Ekaterina Shutova Computer Laboratory University of Cambridge, UK [email protected] Niket Tandon Max Planck Institute for Informatics, Germany [email protected] Gerard de Melo IIIS Tsinghua University, China [email protected] Abstract Selectional preferences (SPs) are widely used in NLP as a rich source of semantic information. While SPs have been traditionally induced from textual data, human lexical acquisition is known to rely on both linguistic and perceptual experience. We present the first SP learning method that simultaneously draws knowledge from text, images and videos, using image and video descriptions to obtain visual features. Our results show that it outperforms linguistic and visual models in isolation, as well as the existing SP induction approaches. 1 Introduction Selectional preferences (SPs) are the semantic constraints that a predicate places onto its arguments. This means that certain classes of entities are more likely to fill the predicate’s argument slot than others. For instance, while the sentences “The authors wrote a new paper.” and “The cat is eating your sausage!” sound natural and describe plausible real-life situations, the sentences “The carrot ate the keys.” and “The law sang a driveway.” appear implausible and difficult to interpret, as the arguments do not satisfy the verbs’ common preferences. SPs provide generalisations about word meaning and use and find a wide range of applications in natural language processing (NLP), including word sense disambiguation (Resnik, 1997; McCarthy and Carroll, 2003; Wagner et al., 2009), resolving ambiguous syntactic attachments (Hindle and Rooth, 1993), semantic role labelling (Gildea and Jurafsky, 2002; Zapirain et al., 2010), natural language inference (Zanzotto et al., 2006; Pantel et al., 2007), and figurative language processing (Fass, 1991; Mason, 2004; Shutova et al., 2013; Li et al., 2013). Automatic acquisition of SPs from linguistic data has thus become an active area of research. The community has investigated a range of techniques to tackle data sparsity and to perform generalisation from observed arguments to their underlying types, including the use of WordNet synsets as SP classes (Resnik, 1993; Li and Abe, 1998; Clark and Weir, 1999; Abney and Light, 1999; Ciaramita and Johnson, 2000), word clustering (Rooth et al., 1999; Bergsma et al., 2008; Sun and Korhonen, 2009), distributional similarity metrics (Erk, 2007; Peirsman and Pad´o, 2010), latent variable models ( ´O S´eaghdha, 2010; Ritter et al., 2010), and neural networks (Van de Cruys, 2014). Little research, however, has been concerned with the sources of knowledge that underlie the learning of SPs. There is ample evidence in cognitive and neurolinguistics that our concept learning and semantic representation are grounded in perception and action (Barsalou, 1999; Glenberg and Kaschak, 2002; Barsalou, 2008; Aziz-Zadeh and Damasio, 2008). This suggests that word meaning and relational knowledge are acquired not only from linguistic input but also from our experiences in the physical world. 
Multi-modal models of word meaning have thus enjoyed a growing interest in semantics (Bruni et al., 2014), outperforming purely text-based models in tasks such as similarity estimation (Bruni et al., 2014; Kiela et al., 2014), predicting compositionality (Roller and Schulte im Walde, 2013), and concept categorization (Silberer and Lapata, 2014). However, to date these approaches relied on low-level image features such as color histograms or SIFT keypoints to represent the meaning of isolated words. To the best of our knowledge, there has not yet been a multimodal semantic approach performing extraction of 950 predicate-argument relations from visual data. In this paper, we propose the first SP model integrating information about predicate-argument interactions from text, images, and videos. We expect it to outperform purely text-based models of SPs, which suffer from two problems: topic bias and figurative uses of words. Such bias stems from the fact that we typically write about abstract topics and events, resulting in high coverage of abstract senses of words and comparatively lower coverage of the original physical senses (Shutova, 2011). For instance, the verb cut is used predominantly in the domains of economics and finance and its most frequent direct objects are cost and price, according to the British National Corpus (BNC) (Burnard, 2007). Predicate-argument distributions acquired from text thus tend to be skewed in favour of abstract domains and figurative uses, inadequately reflecting our daily experiences with cutting, which guide human acquisition of meaning. Integrating predicate-argument relations observed in the physical world (in the form of image and video descriptions) with the more abstract text-based relations is likely to yield a more realistic semantic model, with real prospects of improving the performance of NLP applications that rely on SPs. We use the BNC as an approximation of linguistic knowledge and a large collection of tagged images and videos from Flickr (www.flickr.com) as an approximation of perceptual knowledge. The human-annotated labels that accompany media on Flickr enable us to acquire predicate-argument cooccurrence information. Our experiments focus on verb preferences for their subjects and direct objects. In summary, our method (1) performs word sense disambiguation and part-of-speech (PoS) tagging of Flickr tag sequences to extract verb-noun co-occurrence; (2) clusters nouns to induce SP classes using linguistic and visual features; (3) quantifies the strength of preference of a verb for a given class by interpolating linguistic and visual SP distributions. We investigate the impact of perceptual information at different levels – from none (purely text-based model) to 100% (purely visual model). We evaluate our model directly against a dataset of human plausibility judgements of verbnoun pairs, as well as in the context of a semantic task: metaphor interpretation. Our results show that the interpolated model combining linguistic and visual relations outperforms the purely linguistic model in both evaluation settings. 2 Related work 2.1 Selectional preference induction The widespread interest in automatic acquisition of SPs was triggered by the work of Resnik (1993), who treated SPs as probability distributions over all potential arguments of a predicate, rather than a single argument class assigned to the predicate. The original study used WordNet to define SP classes and to map the words in the corpus to those classes. 
Since then, the field has moved toward automatic induction of SP classes from corpus data. Rooth et al. (1999) presented a probabilistic latent variable model of verb preferences. In their approach, verbargument pairs are generated from a latent variable, which represents a cluster of verb-argument interactions. The latent variable distribution and the probabilities that a latent variable generates the verb and the argument are learned from the data using Expectation Maximization (EM). The latent variables enable the model to recognise previously unseen verb-argument pairs. ´O S´eaghdha (2010) and Ritter et al. (2010) similarly model SPs within a latent variable framework, but use Latent Dirichlet Allocation (LDA) to learn the probability distributions, for single-argument and multi-argument preferences respectively. Pad´o et al. (2007) and Erk (2007) used similarity metrics to approximate selectional preference classes. Their underlying hypothesis is that a predicate-argument combination (p, a) is felicitous if the predicate p is frequently observed in the data with the arguments a′ similar to a. The systems compute similarities between distributional representations of arguments in a vector space. Bergsma et al. (2008) trained an SVM classifier to discriminate between felicitous and infelicitous verb-argument pairs. Their training data consisted of observed verb-argument pairs (positive examples) with unobserved, randomly-generated ones (negative examples). They classified nominal arguments of verbs, using their verb co-occurrence probabilities and information about their semantic classes as features. Bergsma and Goebel (2011) extended this method by incorporating image-driven noun features. They extract color and SIFT keypoint features from images found for a particular noun via Google image searches and add them to the feature vectors to classify nouns as felicitous or infelicitous arguments of a given verb. This method is the closest in spirit to ours and the only one so far to investigate the relevance of visual fea951 tures to lexical preference learning. However, our work casts the problem in a different framework: rather than relying on low-level visual properties of nouns in isolation, we explicitly model interactions of predicates and arguments within an image or a video frame. Van de Cruys (2014) recently presented a deep learning approach to SP acquisition. He trained a neural network to discriminate between felicitous and infelicitous arguments using the data constructed of positive (observed) and negative (randomly-generated) examples for training. The network weights were optimized by requiring the model to assign a higher score to an observed pair than to the unobserved one by a given margin. 2.2 Multi-modal methods in semantics Previous work has used multimodal data to determine distributional similarity or to learn multimodal embeddings that project multiple modalities into the same vector space. Some studies rely on extensions of LDA to obtain correlations between words and visual features (Feng and Lapata, 2010; Roller and Schulte im Walde, 2013). Bruni et al. (2012) integrated visual features into distributional similarity models using simple vector concatenation. Instead of generic visual features, Silberer et al. (2013) relied on supervised learning to train 412 higher-level visual attribute classifiers. Applications of multimodal embeddings include zero-shot object detection, i.e. 
recognizing objects in images without training data for the object class (Socher et al., 2013; Frome et al., 2013; Lazaridou et al., 2014), and automatic generation of image captions (Kulkarni et al., 2013), video descriptions (Rohrbach et al., 2013), or tags (Srivastava et al., 2014). Other applications of multimodal data include language modeling (Kiros et al., 2014) and knowledge mining from images (Chen et al., 2013; Divvala et al., 2014). Young et al. (2014) apply simplification rules to image captions, showing that the resulting hierarchy of mappings between natural language expressions and images can be used for entailment tasks. 3 Experimental data Textual data. We extract linguistic features for our model from the BNC. In particular, we parse the corpus using the RASP parser (Briscoe et al., 2006) and extract subject–verb and verb–object relations from its dependency output. These relations are then used as features for clustering to obtain SP classes, as well as to quantify the strength of association between a particular verb and a particular argument class. Visual data. For the visual features of our model, we mine the Yahoo! Webscope Flickr-100M dataset (Shamma, 2014). Flickr-100M contains 99.3 million images and 0.7 million videos with language tags annotated by users, enabling us to generalise SPs at a large scale. The tags reflect how humans describe objects and actions from a visual perspective. We first stem the tags and remove words that are absent in WordNet (typically named entities and misspellings), then identify their PoS based on their visual context and extract verb–noun cooccurrences. 4 Identifying visual verb-noun co-occurrence In the Flickr-100M dataset, tags are assigned to images and videos in the form of sets of words, rather than grammatically coherent sentences. However, the roles that individual words play are still discernible from their visual context, as manifested by the other words in a given set. In order to identify verbs and nouns co-occurring in the same images, we propose a list sense disambiguation method that first maps each word to a set of possible WordNet senses (accompanied by PoS information) and then performs a joint optimization on the space of candidate word senses, such that their overall similarity is maximized. This amounts to assigning those senses and PoS tags to the words in the set that best fit together. For a given word i and one of its candidate WordNet senses j, we consider an assignment variable xij and compute a sense frequency-based prior for it as Pij = 1 1+R, where R is the WordNet rank of the sense. We then compute a similarity score Sij,i′j′ between all pairs of sense choices for two words i,i′ and their respective candidate senses j,j′. For these, we rely on WordNet’s taxonomic pathbased similarities (Pedersen et al., 2004) in the case of noun-noun sense pairs, the Adapted Lesk similarity measure for adjective-adjective pairs, and finally, WordNet verb-groups and VerbNet class membership (Kipper-Schuler, 2005) for verb-verb pairs. Note that even parts of speech that are disregarded later on can still be helpful at this stage, as we aim at a joint optimization over all words. 
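A minimal sketch of the sense priors and of the noun–noun similarity scores, using NLTK's WordNet interface as a stand-in for the WordNet::Similarity measures of Pedersen et al. (2004); this stand-in, and the zero-based sense rank, are assumptions made for illustration.

```python
from itertools import product
from nltk.corpus import wordnet as wn

def sense_priors(word):
    """Rank-based priors P_ij = 1 / (1 + R) over the WordNet senses of a tag word.

    The WordNet rank R is taken to be zero-based here (an assumption).
    """
    return {sense: 1.0 / (1 + rank) for rank, sense in enumerate(wn.synsets(word))}

def noun_sense_similarities(word_i, word_j):
    """Taxonomic path similarity for every pair of candidate noun senses.

    Only the noun-noun case is shown; adjective pairs would use Adapted Lesk,
    and verb pairs WordNet verb groups and VerbNet class membership.
    """
    sims = {}
    for s_i, s_j in product(wn.synsets(word_i, pos=wn.NOUN),
                            wn.synsets(word_j, pos=wn.NOUN)):
        sims[(s_i, s_j)] = s_i.path_similarity(s_j) or 0.0
    return sims
```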
After the similarities have been obtained for all rel952 evant sense pairs, we maximize the coherence of the senses of the words in the set as an Integer Linear Program, using the Gurobi Optimizer (Gurobi Optimization, 2014) and solving maximize P i Pijxij + P ij P i′j′ Sij,i′j′Bij,i′j′ subject to P j xij ≤1 ∀i, xij ∈{0, 1} ∀i, j, Bij,i′j′ ≤xij, Bij,i′j′ ≤xi′j′, Bij,i′j′ ∈{0, 1} ∀i, j, i′j′. The binary variables Bij,i′j′ are 1 iff xij = 1 and xi′j′ = 1, indicating that both senses were simultaneously chosen. The optimizer disambiguates the input words by selecting sense tuples x1j, x2j, . . . , from which we can directly obtain the corresponding PoS information. Verb-noun co-occurrence information is then extracted from the PoS-tagged sets. 5 Selectional preference model 5.1 Acquisition of argument classes To address the issue of data sparsity, we generalise selectional preferences over argument classes, as opposed to individual arguments. We obtain SP classes by means of spectral clustering of nouns with lexico-syntactic features, which has been shown effective in previous lexical classification tasks (Brew and Schulte im Walde, 2002; Sun and Korhonen, 2009). Spectral clustering partitions the data, relying on a similarity matrix that records similarities between all pairs of data points. We use Jensen-Shannon divergence to measure the similarity between feature vectors for two nouns, wi and wj, defined as follows: dJS(wi, wj) = 1 2dKL(wi||m) + 1 2dKL(wj||m), (1) where dKL is the Kullback-Leibler divergence, and m is the average of wi and wj. We construct the similarity matrix S computing similarities Sij as Sij = exp(−dJS(wi, wj)). The matrix S then encodes a similarity graph G (over our nouns), where Sij are the adjacency weights. The clustering problem can then be defined as identifying the optimal partition, or cut, of the graph into clusters, such that the intra-cluster weights are high and the intercluster weights are low. We use the multiway normalized cut (MNCut) algorithm of Meila and Shi (2001) for this purpose. The algorithm transforms S into a stochastic matrix P containing transition probabilities between the vertices in the graph as P = D−1S, (2) where the degree matrix D is a diagonal matrix with Dii = PN j=1 Sij. It then computes the K leading eigenvectors of P, where K is the desired number of clusters. The graph is partitioned by finding approximately equal elements in the eigenvectors using a simpler clustering algorithm, such as k-means. Meila and Shi (2001) have shown that the partition I derived in this way minimizes the MNCut criterion: MNCut(I) = K X k=1 (1 −P(Ik →Ik|Ik)), (3) which is the sum of transition probabilities across different clusters. Since k-means starts from a random cluster assignment, we run the algorithm multiple times and select the partition that minimizes the cluster distortion, i.e. distances to cluster centroid. We cluster nouns using linguistic and visual features in two independent experiments. Clustering with linguistic features: We first cluster the 2,000 most frequent nouns in the BNC, using their grammatical relations as features. The features consist of verb lemmas appearing in the subject, direct object and indirect object relations with the given nouns in the RASP-parsed BNC, indexed by relation type. The feature vectors are first constructed from the corpus counts, and subsequently normalized by the sum of the feature values. Clustering with visual features: We also cluster the 2,000 most frequent nouns in the Flickr data. 
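Both clustering experiments rely on the same machinery, which can be sketched as follows: the exp(−dJS) affinity matrix follows Section 5.1, while scikit-learn's SpectralClustering over the precomputed affinities is used here as a stand-in for the MNCut algorithm of Meila and Shi (2001) (an assumption for illustration).

```python
import numpy as np
from sklearn.cluster import SpectralClustering

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two normalized feature vectors."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))  # Kullback-Leibler divergence
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def cluster_nouns(features, n_clusters):
    """Cluster noun feature vectors (rows normalized to sum to 1) into SP classes.

    S_ij = exp(-d_JS(w_i, w_j)); SpectralClustering with a precomputed affinity
    stands in for the MNCut partitioning described in Section 5.1.
    """
    n = features.shape[0]
    S = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            S[i, j] = S[j, i] = np.exp(-js_divergence(features[i], features[j]))
    model = SpectralClustering(n_clusters=n_clusters, affinity="precomputed",
                               random_state=0)
    return model.fit_predict(S)
```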
Since our goal is to create argument classes for verb preferences, we extract co-occurrence features that map to verb-noun relations from PoSdisambiguated image tags. We use the verb lemmas co-occurring with the noun in the same images and videos as features for clustering. The feature values are again normalised by their sum. SP classes: Example clusters produced using linguistic and visual features are shown in Figures 1 and 2. Our cluster analysis reveals that the imagederived clusters tend to capture scene-like relations (e.g. beach and ocean; guitar and concert), as opposed to types of entities, yielded by the linguistic features and better suited to generalise over 953 desire hostility anxiety passion doubt fear curiosity enthusiasm impulse instinct emotion feeling suspicion official officer inspector journalist detective constable police policeman reporter book statement account draft guide advertisement document report article letter Figure 1: Clusters obtained using linguistic features pilot aircraft plane airline landing flight wing arrival departure airport concert festival music guitar alternative band instrument audience event performance rock benjamin cost benefit crisis debt credit customer consumer Figure 2: Clusters obtained using visual features predicate-argument structure. In addition, the image features tend to be sparse for abstract concepts, reducing both the quality and the coverage of abstract clusters. We thus use the noun clusters derived with linguistic features as an approximation of SP classes. 5.2 Quantifying selectional preferences Once the SP classes have been obtained, we need to quantify the strength of association of a given verb with each of the classes. We adopt an information theoretic measure proposed by Resnik (1993) for this purpose. Resnik first measures selectional preference strength (SPS) of a verb in terms of Kullback-Leibler divergence between the distribution of noun classes occurring as arguments of this verb, p(c|v), and the prior distribution of the noun classes, p(c). SPSR(v) = X c p(c|v) log p(c|v) p(c) , (4) where R is the grammatical relation for which SPs are computed. SPS measures how strongly the predicate constrains its arguments. Selectional association of the verb with a particular argument class is then defined as a relative contribution of that argument class to the overall SPS of the verb. AssR(v, c) = 1 SPSR(v)p(c|v) log p(c|v) p(c) (5) We use this measure to quantify verb SPs based on linguistic and visual co-occurrence information. We first extract verb-subject and verb-direct object relations from the RASP-parsed BNC, map the argument heads to SP classes and quantify selectional association of a given verb with each SP class, thus acquiring its base preferences. Since visual verbnoun co-occurrences do not contain information about grammatical relations, we rely on linguistic data to provide a set of base arguments of the verb for a given grammatical relation. We then interpolate the verb-argument probabilities from linguistic and visual models for the base arguments of the verb, thus preserving information about grammatical relations. 5.3 Linguistic and visual model interpolation We investigate two model interpolation techniques: simple linear interpolation and predicate-driven linear interpolation. Linear interpolation combines information from component models by computing a weighted average of their probabilities. 
The interpolated probability of an event e is derived as pLI(e) = P i λipi(e), where pi(e) is the probability of e in the model i and λi is the interpolation weight defined such that P i λi = 1; and λi ∈[0, 1]. In our experiments, we interpolate the probabilities p(c) and p(c|v) in the linguistic (LM) and visual (VM) models, as follows: pLI(c) = λLMpLM(c) + λVMpVM(c) (6) pLI(c|v) = λLMpLM(c|v) + λVMpVM(c|v) (7) We experiment with a number of parameter settings for λLM and λVM. Predicate-driven linear interpolation derives predicate-specific interpolation weights directly from the data, as opposed to pre-setting them universally for all verbs. For each predicate v, we compute the interpolation weights based on its prominence in the respective corpus, as follows: λi(v) = reli(v) P k relk(v), (8) where rel is the relevance function of model i for verb v, computed as its relative frequency in the respective corpus: reli(v) = fi(v) P V fi(v). The interpolation weights for LM and VM are then computed as λLM(v) = relLM(v) relLM(v) + relVM(v) (9) λVM(v) = relVM(v) relLM(v) + relVM(v). (10) The motivation for this approach comes from the fact that not all verbs are represented equally well in linguistic and visual data. For instance, while concrete verbs, such as run, push or throw, are more likely to be prominent in visual data, abstract verbs, such as understand or speculate, are best 954 represented in text. Relative linguistic and visual frequencies of a verb provide a way to estimate the relevance of linguistic and visual features to its SP learning. 6 Direct evaluation and data analysis We evaluate the predicate-argument scores assigned by our models against a dataset of human plausibility judgements of verb-direct object pairs collected by Keller and Lapata (2003). Their dataset is balanced with respect to the frequency of verb-argument relations, as well as their plausibility and implausibility, thus creating a realistic SP evaluation task. Keller and Lapata selected 30 predicates and matched each of them to three arguments from different co-occurrence frequency bands according to their BNC counts, e.g. divert attention (high frequency), divert water (medium) and divert fruit (low). This constituted their dataset of Seen verb-noun pairs, 90 in total. Each of the predicates was then also paired with three randomly selected arguments with which it did not occur in the BNC, creating the Unseen dataset. The pairs in both datasets were then rated for their plausibility by 27 human subjects, and their judgements were aggregated into a gold standard. We compare the verb-argument scores generated by our linguistic (LSP), visual (VSP) and interpolated (ISP) SP models against these two datasets in terms of Pearson correlation coefficient, r, and Spearman rank correlation coefficient, ρ. The selectional association score of the cluster to which a given noun belongs is taken to represent the preference score of the verb for this noun. If a noun is not present in our argument clusters, we match it to its nearest cluster, as determined by its distributional similarity to the cluster centroid in terms of Jensen-Shannon divergence. We first compare LSP, VSP and ISP with static and predicate-driven interpolation weights. The results, presented in Table 1, demonstrate that the interpolated model outperforms both LSP and VSP used on their own. The best performance is attained with the static interpolation weights of λLM = 0.8 (r = 0.540; ρ = 0.728) and λLM = 0.9 (r = 0.548; ρ = 0.699). 
This suggests that while linguistic input plays a crucial role in SP induction (by providing both semantic and syntactic information), visual features further enhance the quality of SPs, as we expected. Figure 3 shows LSP- and VSP-acquired direct object preferences of the verb Seen Unseen r ρ r ρ VSP 0.180 0.126 0.118 0.132 ISP: λLM = 0.1 0.279 0.532 0.220 0.371 ISP: λLM = 0.2 0.349 0.556 0.278 0.411 ISP: λLM = 0.3 0.385 0.558 0.305 0.423 ISP: λLM = 0.4 0.410 0.571 0.320 0.428 ISP: λLM = 0.5 0.448 0.579 0.329 0.430 ISP: λLM = 0.6 0.461 0.591 0.330 0.431 ISP: λLM = 0.7 0.523 0.713 0.335 0.431 ISP: λLM = 0.8 0.540 0.728 0.339 0.430 ISP: λLM = 0.9 0.548 0.699 0.342 0.429 ISP: Predicate-driven 0.476 0.597 0.391 0.551 LSP 0.512 0.688 0.412 0.559 Table 1: Model comparison on the plausibility data of Keller and Lapata (2003) LSP: (1) 0.309 expenditure cost risk expense emission budget spending; (2) 0.201 dividend price rate premium rent rating salary wages; (3) 0.088 employment investment growth supplies sale import export production [..] ISP predicate-driven λLM = 0.65 (1) 0.346 expenditure cost risk expense emission budget spending; (2) 0.211 dividend price rate premium rent rating salary wages; (3) 0.126 tail collar strand skirt trousers hair curtain sleeve VSP: (1) 0.224 tail collar strand skirt trousers hair curtain sleeve; (2) 0.098 expenditure cost risk expense emission budget spending; (3) 0.090 management delivery maintenance transport service housing [..] Figure 3: Top three direct object classes for cut and their association scores, assigned by different models cut, as well as the effects of merging the features in the interpolated model – the verbs’ experiential arguments (e.g. hair or fabric) are emphasized by the visual features. However, the model based on visual features alone performs poorly on the dataset of Keller and Lapata (2003). This is partly explained by the fact that a number of verbs in this dataset are abstract verbs, whose visual representations in the Flickr data are sparse. In addition, VSP (as other visual models used in isolation from text) is not syntaxaware and is unable to discriminate between different types of semantic relations. VSP thus acquires sets of verb-argument relations that are closer in nature to scene descriptions and semantic frames than to lexico-syntactic paradigms. Figure 4 shows the differences between linguistic and visual arguments of the verb kill ranked by LSP and VSP. While LSP produces mainly semantic objects of kill, VSP output contains other types of arguments, such as weapon (instrument) and death (consequence). Taking the argument classes produced by the linguistic model as a basis and then re-ranking 955 LSP: (1) 0.523 girl other woman child person people; (2) 0.164 fleet soldier knight force rebel guard troops crew army pilot; (3) 0.133 sister daughter parent relative lover cousin friend wife mother husband brother father; (4) 0.048 being species sheep animal creature horse baby human fish male lamb bird rabbit [..]; (5) 0.045 victim bull teenager prisoner hero gang enemy rider offender youth killer thief [..] VSP: (1) 0.180 defeat fall death tragedy loss collapse decline [..]; (2) 0.141 girl other woman child person people; (3) 0.128 abuse suicide killing offence murder breach crime; (4) 0.113 handle weapon horn knife blade stick sword [..]; (5) 0.095 victim bull teenager prisoner hero gang enemy rider offender youth killer thief [..] 
Figure 4: Top five arguments of kill and their association scores, assigned by LSP and VSP (1) 0.442 drink coffee champagne pint wine beer; (2) 0.182 mixture dose substance drug milk cream alcohol chemical [..]; (3) 0.091 girl other woman child person people; (4) 0.053 sister daughter parent relative lover cousin friend wife mother husband brother father; (5) 0.050 drop tear sweat paint blood water juice Figure 5: Error analysis: Mixed subjects and direct objects of drink, assigned by the predicate-driven ISP them to incorporate visual statistics helps to avoid the above problem for the interpolated models, whose output corresponds to grammatical relations. However, static interpolation weights (emphasizing linguistic features over the visual ones for all verbs equally) outperformed the predicate-driven interpolation technique, attaining correlations of r = 0.548 and r = 0.476 respectively. This is mainly due to the fact that some verbs are overrepresented in the visual data (e.g. the predicatedriven interpolation weight for the verb drink is λLM = 0.08). As a result, candidate argument classes (selected based on syntactically-parsed linguistic input) are ranked predominantly based on visual statistics. This makes it possible to emphasize incorrectly parsed arguments (such as subject relations in the direct object SP distribution and vice versa). The predicate-driven ISP output for direct object SPs of drink, for instance, contains a mixture of subject and direct object classes, as shown in Figure 5. Using a static model with a high λLM weight helps to avoid such errors and, therefore, leads to a better performance. In order to investigate the composition of the visual and linguistic datasets, we assess the average level of concreteness of the verbs and nouns present in the datasets. We use the concreteness ratings from the MRC Psycholinguistic Database (Wilson, 1988) for this purpose. In this database, nouns and Figure 6: WordNet top level class distributions for verbs in the visual and textual corpora Seen Unseen r ρ r ρ Rooth et al. (1999)* 0.455 0.487 0.479 0.520 Pad´o et al. (2007)* 0.484 0.490 0.398 0.430 O’Seaghdha (2010) 0.520 0.548 0.564 0.605 VSP 0.180 0.126 0.118 0.132 ISP (best) 0.548 0.699 0.342 0.429 LSP 0.512 0.688 0.412 0.559 Table 2: Comparison to other SP induction methods. * Results reported in O’Seaghdha (2010). verbs are rated for concreteness on a scale from 100 (highly abstract) to 700 (highly concrete). We map the verbs and nouns in our textual and visual corpora to their MRC concreteness scores. We then calculate a dataset-wide concreteness score as an average of the concreteness scores of individual verbs and nouns weighted by their frequency in the respective corpus. The average concreteness scores in the visual dataset were 506.4 (nouns) and 498.1 (verbs). As expected, they are higher than the respective scores in the textual data: 433.1 (nouns) and 363.4 (verbs). In order to compare the types of actions that are common in each of the datasets, we map the verbs to their corresponding top level classes in WordNet. Figure 6 shows the comparison of prominent verb classes in visual and textual data. One can see from the Figure that the visual dataset is well suited for representing motion, perception and contact, while abstract verbs related to e.g. communication, cognition, possession or change are more common in textual data. We also compare the performance of our models to existing SP induction methods: the EM-based clustering method of Rooth et al. 
(1999), the vector space similarity-based method of Pad´o et al. (2007) and the LDA topic modelling approach of ´O S´eaghdha (2010)1. The best ISP configuration 1Since Rooth et al.’s (1999) and Pad´o et al.’s (2007) models were not originally evaluated on the same dataset, we use the 956 (λLM = 0.9) outperforms all of these methods, as well as our own LSP, on the Seen dataset, confirming the positive contribution of visual features. However, it achieves less success on the Unseen data, where the methods of ´O S´eaghdha (2010) and Rooth et al. (1999) are leading. This result speaks in favour of latent variable models for acquisition of SP estimates for rarely attested predicateargument pairs. In turn, this suggests that integrating our ISP model (that currently outperforms others on more common pairs) with such techniques is likely to improve SP prediction across frequency bands. 7 Task-based evaluation In order to investigate the applicability of perceptually grounded SPs in wider NLP, we evaluate them in the context of an external semantic task – that of metaphor interpretation. Since metaphor is based on transferring imagery and knowledge across domains – typically from more familiar domains of physical experiences to the sphere of vague and elusive abstract thought – metaphor interpretation provides an ideal framework for testing perceptually grounded SPs. Our experiments rely on the metaphor interpretation method of Shutova (2010), in which text-derived SPs are a central component of the system. We replace the SP component with our LSP and ISP (λLM = 0.8) models and compare their performance in the context of metaphor interpretation. Shutova (2010) defined metaphor interpretation as a paraphrasing task, where literal paraphrases for metaphorical expressions are derived from corpus data using a set of statistical measures. For instance, their system interprets the metaphor “a carelessly leaked report” as “a carelessly disclosed report”. Focusing on metaphorical verbs in subject and direct object constructions, Shutova first applies a maximum likelihood model to extract and rank candidate paraphrases for the verb given the context, as follows: P(i, w1, ..., wN) = QN n=1 f(wn, i) (f(i))N−1 · P k f(ik), (11) where f(i) is the frequency of the paraphrase on its own and f(wn, i) the co-occurrence frequency of the paraphrase with the context word wn. This results for their re-implementation reported by O’Seaghdha (2010), who conducted a comprehensive evaluation of SP models on the plausibility data of Keller and Lapata (2003). model favours paraphrases that match the given context best. These candidates are then filtered based on the presence of shared features with the metaphorical verb, as defined by their location and distance in the WordNet hierarchy. All the candidates that have a common hypernym with the metaphorical verb within three levels of the WordNet hierarchy are selected. This results in a set of paraphrases retaining the meaning of the metaphorical verb. However, some of them are still figuratively used. Shutova further applies an SP model to discriminate between figurative and literal paraphrases, treating a strong selectional preference fit as a likely indicator of literalness. The candidates are re-ranked by the SP model, emphasizing the verbs whose preferences the noun in the context matches best. We use LSP and ISP scores to perform this re-ranking step. We evaluate the performance of our models on this task using the metaphor paraphrasing gold standard of Shutova (2010). 
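To make the paraphrase ranking step concrete, the following is a minimal Python sketch of the maximum likelihood scoring of Equation (11) followed by the SP-based re-ranking. The frequency tables, the candidate set, and the sp_score function (an LSP or ISP association score for the candidate verb with the context noun) are assumed to be supplied by the caller, the WordNet shared-hypernym filter is omitted, and none of the names are taken from Shutova's (2010) implementation.

    def likelihood_score(cand, context_words, freq, cooc, candidates):
        # numerator: product of co-occurrence counts f(w_n, i) of the
        # paraphrase with each context word
        num = 1.0
        for w in context_words:
            num *= cooc.get((w, cand), 0)
        # denominator: f(i)^(N-1) times the summed frequency of all candidates
        n = len(context_words)
        denom = (freq.get(cand, 1) ** (n - 1)) * sum(freq.get(c, 0) for c in candidates)
        return num / denom if denom else 0.0

    def rank_paraphrases(candidates, context_words, freq, cooc, sp_score):
        # stage 1: rank candidates by the maximum likelihood model (Equation 11);
        # in the full pipeline only candidates passing the WordNet filter survive
        ranked = sorted(candidates,
                        key=lambda c: likelihood_score(c, context_words, freq, cooc, candidates),
                        reverse=True)
        # stage 2: re-rank by selectional preference fit, so candidates whose
        # preferences the context noun matches best are promoted as literal
        return sorted(ranked, key=sp_score, reverse=True)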
The dataset consists of 52 verb metaphors and their human-produced literal paraphrases. Following Shutova, we evaluate the performance in terms of mean average precision (MAP), which measures the ranking quality of GS paraphrases across the dataset. MAP is defined as follows: MAP = 1 M M X j=1 1 Nj Nj X i=1 Pji, where M is the number of metaphorical expressions, Nj is the number of correct paraphrases for the metaphorical expression j, Pji is the precision at each correct paraphrase (the number of correct paraphrases among the top i ranks). As compared to the gold standard, ISP attains a MAP score of 0.65, outperforming both the LSP (MAP = 0.62) and the original system of Shutova (2010) (MAP = 0.62), demonstrating the positive contribution of visual features. 8 Conclusion We have presented the first SP induction method that simultaneously draws knowledge from text, images and videos. Our experiments show that it outperforms linguistic and visual models in isolation, as well as the previous approaches to SP learning. We believe that this model has a wide applicability in NLP, where many systems already rely on automatically induced SPs. It can also benefit image caption generation systems, which 957 typically focus on objects rather than actions, by providing information about predicate-argument structure. In the future, it would be interesting to derive the information about predicate-argument relations from low-level visual features directly. However, to our knowledge, reliably mapping images to actions (i.e. verbs) at a large-scale is still a challenging task. Human-annotated image and video descriptions allow us to investigate what types of verb– noun relations are in principle present in the visual data and the ways in which they are different from the ones found in text. Our results show that visual data is better suited for capturing physical properties of concepts as well as containing relations not explicitly described in text. The presented interpolation techniques are also applicable outside multi-modal semantics. For instance, they can be generalised to acquire SPs from unbalanced corpora of different sizes (e.g. for languages lacking balanced corpora) or to perform domain adaptation of SPs. In the future, we would like to apply SP interpolation to multilingual SP learning, i.e. integrating data from multiple languages for more accurate SP induction and projecting universal semantic relations to low-resource languages. It is also interesting to investigate SP learning at the level of semantic predicates (e.g. automatically inducing FrameNet-style frames), where combining the visual and linguistic knowledge is likely to outperform text-based models on their own. Acknowledgements Ekaterina Shutova’s research is funded by the University of Cambridge and the Leverhulme Trust Early Career Fellowship. Gerard de Melo’s work is funded by China 973 Program Grants 2011CBA00300, 2011CBA00301, and NSFC Grants 61033001, 61361136003, 61450110088. We are grateful to the ACL reviewers for their insightful feedback. References Steven Abney and Marc Light. 1999. Hiding a Semantic Hierarchy in a Markov Model. In Proceedings of the Workshop on Unsupervised Learning in Natural Language Processing, ACL, pages 1–8. Lisa Aziz-Zadeh and Antonio Damasio. 2008. Embodied semantics for actions: Findings from functional brain imaging. Journal of Physiology – Paris, 102(13). Lawrence W. Barsalou. 1999. Perceptual symbol systems. Behavioral and Brain Sciences, 22(4):577– 609. Lawrence W. Barsalou. 2008. 
Grounded cognition. Annual Review of Psychology, 59(1):617–645. Shane Bergsma and Randy Goebel. 2011. Using visual information to predict lexical preference. In Proceedings of RANLP. Shane Bergsma, Dekang Lin, and Randy Goebel. 2008. Discriminative learning of selectional preference from unlabeled text. In Proceedings of EMNLP 2008, EMNLP ’08, pages 59–68, Honolulu, Hawaii. Chris Brew and Sabine Schulte im Walde. 2002. Spectral clustering for German verbs. In Proceedings of EMNLP, pages 117–124. Ted Briscoe, John Carroll, and Rebecca Watson. 2006. The second release of the RASP system. In Proceedings of the COLING/ACL on Interactive presentation sessions, pages 77–80. Elia Bruni, Gemma Boleda, Marco Baroni, and Nam Khanh Tran. 2012. Distributional semantics in Technicolor. In Proceedings of ACL 2012, pages 136–145, Jeju Island, Korea, July. ACL. Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. Journal of Artificial Intelligence Research, 49:1–47. Lou Burnard. 2007. Reference Guide for the British National Corpus (XML Edition). Xinlei Chen, Abhinav Shrivastava, and Abhinav Gupta. 2013. NEIL: Extracting Visual Knowledge from Web Data. In Proceedings of ICCV 2013. Massimiliano Ciaramita and Mark Johnson. 2000. Explaining away ambiguity: Learning verb selectional preference with Bayesian networks. In Proceedings of COLING 2000, pages 187–193. Stephen Clark and David Weir. 1999. An iterative approach to estimating frequencies over a semantic hierarchy. In Proceedings of EMNLP/VLC 1999, pages 258–265. Santosh Divvala, Ali Farhadi, and Carlos Guestrin. 2014. Learning everything about anything: Weblysupervised visual concept learning. In Proceedings of CVPR 2014. Katrin Erk. 2007. A simple, similarity-based model for selectional preferences. In Proceedings of ACL 2007. Dan Fass. 1991. met*: A method for discriminating metonymy and metaphor by computer. Computational Linguistics, 17(1):49–90. 958 Yansong Feng and Mirella Lapata. 2010. Visual information in semantic representation. In Proceedings of NAACL 2010, pages 91–99. ACL. Andrea Frome, Greg Corrado, Jon Shlens, Samy Bengio, Jeffrey Dean, Marc’Aurelio Ranzato, and Tomas Mikolov. 2013. DeViSE: A deep visualsemantic embedding model. In Proceedings of NIPS 2013. Daniel Gildea and Daniel Jurafsky. 2002. Automatic labeling of semantic roles. Computational Linguistics, 28. Arthur M. Glenberg and Michael P. Kaschak. 2002. Grounding language in action. Psychonomic Bulletin and Review, pages 558–565. Gurobi Optimization. 2014. Gurobi optimizer reference manual, version 5.6. Houston, TX, USA. Donald Hindle and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19:103–120. Frank Keller and Mirella Lapata. 2003. Using the web to obtain frequencies for unseen bigrams. Computational Linguistics, 29(3):459–484. Douwe Kiela, Felix Hill, Anna Korhonen, and Stephen Clark. 2014. Improving multi-modal representations using image dispersion: Why less is sometimes more. In Proceedings of ACL 2014, Baltimore, Maryland. Karin Kipper-Schuler. 2005. VerbNet: A broadcoverage, comprehensive verb lexicon. Ph.D. thesis, University of Pennsylvania, PA. Ryan Kiros, Ruslan Salakhutdinov, and Richard S. Zemel. 2014. Multimodal neural language models. In Proceedings of ICML 2014, pages 595–603. Girish Kulkarni, Visruth Premraj, Vicente Ordonez, Sagnik Dhar, Siming Li, Yejin Choi, Alexander C. Berg, and Tamara L. Berg. 2013. Babytalk: Understanding and generating simple image descriptions. IEEE Trans. 
Pattern Anal. Mach. Intell., 35(12):2891–2903. Angeliki Lazaridou, Elia Bruni, and Marco Baroni. 2014. Is this a wampimuk? cross-modal mapping between distributional semantics and the visual world. In Proceedings of ACL 2014, pages 1403– 1414. ACL. Hang Li and Naoki Abe. 1998. Generalizing case frames using a thesaurus and the mdl principle. Computational Linguistics, 24(2):217–244. Hongsong Li, Kenny Q. Zhu, and Haixun Wang. 2013. Data-driven metaphor recognition and explanation. Transactions of the Association for Computational Linguistics, 1:379–390. Zachary Mason. 2004. Cormet: a computational, corpus-based conventional metaphor extraction system. Computational Linguistics, 30(1):23–44. Diana McCarthy and John Carroll. 2003. Disambiguating nouns, verbs, and adjectives using automatically acquired selectional preferences. Computational Linguistics, 29(4):639–654. Marina Meila and Jianbo Shi. 2001. A random walks view of spectral segmentation. In Proceedings of AISTATS. Diarmuid ´O S´eaghdha. 2010. Latent variable models of selectional preference. In Proceedings of ACL 2010. Sebastian Pad´o, Ulrike Pad´o, and Katrin Erk. 2007. Flexible, corpus-based modelling of human plausibility judgements. In Proceedings of EMNLPCoNLL. P. Pantel, R. Bhagat, T. Chklovski, and E. Hovy. 2007. Isp: Learning inferential selectional preferences. In Proceedings of NAACL 2007. Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. Wordnet:: Similarity: measuring the relatedness of concepts. In Demonstration Papers at HLT-NAACL 2004, pages 38–41. Y. Peirsman and S. Pad´o. 2010. Cross-lingual induction of selectional preferences with bilingual vector spaces. In Proceedings of NAACL 2010, pages 921– 929. Philip Resnik. 1993. Selection and information: A class-based approach to lexical relationships. Technical report, University of Pennsylvania. Philip Resnik. 1997. Selectional preference and sense disambiguation. In ACL SIGLEX Workshop on Tagging Text with Lexical Semantics, Washington, D.C. Alan Ritter, Mausam Etzioni, and Oren Etzioni. 2010. A latent dirichlet allocation method for selectional preferences. In Proceedings ACL 2010, pages 424– 434. Marcus Rohrbach, Wei Qiu, Ivan Titov, Stefan Thater, Manfred Pinkal, and Bernt Schiele. 2013. Translating video content to natural language descriptions. In Proceedings of ICCV 2013. Stephen Roller and Sabine Schulte im Walde. 2013. A Multimodal LDA Model integrating Textual, Cognitive and Visual Modalities. In Proceedings of EMNLP 2013, pages 1146–1157, Seattle, WA. Mats Rooth, Stefan Riezler, Detlef Prescher, Glenn Carroll, and Franz Beil. 1999. Inducing a semantically annotated lexicon via EM-based clustering. In Proceedings of ACL 1999, pages 104–111. David Shamma. 2014. One hundred million Creative Commons Flickr images for research. http: //labs.yahoo.com/news/yfcc100m/. 959 Ekaterina Shutova, Simone Teufel, and Anna Korhonen. 2013. Statistical Metaphor Processing. Computational Linguistics, 39(2). Ekaterina Shutova. 2010. Automatic metaphor interpretation as a paraphrasing task. In Proceedings of NAACL 2010, pages 1029–1037, Los Angeles, USA. Ekaterina Shutova. 2011. Computational Approaches to Figurative Language. Ph.D. thesis, University of Cambridge, UK. Carina Silberer and Mirella Lapata. 2014. Learning grounded meaning representations with autoencoders. In Proceedings of ACL 2014, Baltimore, Maryland. Carina Silberer, Vittorio Ferrari, and Mirella Lapata. 2013. Models of semantic representation with visual attributes. 
In Proceedings of ACL 2013, pages 572–582. Richard Socher, Milind Ganjoo, Christopher D. Manning, and Andrew Ng. 2013. Zero-shot learning through cross-modal transfer. In Proceedings of NIPS 2013, pages 935–943. Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929–1958. Lin Sun and Anna Korhonen. 2009. Improving verb clustering with automatically acquired selectional preferences. In Proceedings of EMNLP 2009. Tim Van de Cruys. 2014. A neural network approach to selectional preference acquisition. In Proceedings of EMNLP 2014. Wiebke Wagner, Helmut Schmid, and Sabine Schulte Im Walde. 2009. Verb sense disambiguation using a predicate-argument clustering model. In Proceedings of the CogSci Workshop on Semantic Space Models (DISCO). M.D. Wilson. 1988. The MRC Psycholinguistic Database: Machine Readable Dictionary, Version 2. Behavioural Research Methods, Instruments and Computers, 20:6–11. Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations. Transactions of the Association of Computational Linguistics – Volume 2, Issue 1, pages 67–78. Fabio Massimo Zanzotto, Marco Pennacchiotti, and Maria Teresa Pazienza. 2006. Discovering asymmetric entailment relations between verbs using selectional preferences. In Proceedings of COLING/ACL, pages 849–856. Be˜nat Zapirain, Eneko Agirre, Llu´ıs M`arquez, and Mihai Surdeanu. 2010. Improving semantic role classification with selectional preferences. In Proceedings of NAACL HLT 2010, pages 373–376. 960
2015
92
Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 961–970, Beijing, China, July 26-31, 2015. c⃝2015 Association for Computational Linguistics Joint Case Argument Identification for Japanese Predicate Argument Structure Analysis Hiroki Ouchi Hiroyuki Shindo Kevin Duh Yuji Matsumoto Graduate School of Information and Science Nara Institute of Science and Technology 8916-5, Takayama, Ikoma, Nara, 630-0192, Japan { ouchi.hiroki.nt6, shindo, kevinduh, matsu }@is.naist.jp Abstract Existing methods for Japanese predicate argument structure (PAS) analysis identify case arguments of each predicate without considering interactions between the target PAS and others in a sentence. However, the argument structures of the predicates in a sentence are semantically related to each other. This paper proposes new methods for Japanese PAS analysis to jointly identify case arguments of all predicates in a sentence by (1) modeling multiple PAS interactions with a bipartite graph and (2) approximately searching optimal PAS combinations. Performing experiments on the NAIST Text Corpus, we demonstrate that our joint analysis methods substantially outperform a strong baseline and are comparable to previous work. 1 Introduction Predicate argument structure (PAS) analysis is a shallow semantic parsing task that identifies basic semantic units of a sentence, such as who does what to whom, which is similar to semantic role labeling (SRL)1. In Japanese PAS analysis, one of the most problematic issues is that arguments are often omitted in the surface form, resulting in so-called zeropronouns. Consider the sentence of Figure 1. 1We use “PAS analysis” in this paper following previous work on Japanese PAS analysis. Figure 1: An example of Japanese PAS. The English translation is “Because ϕi caught a cold, Ii skipped school.”. The upper edges are dependency relations, and the under edges are case arguments. “NOM” and “ACC” represents the nominative and accusative arguments, respectively. “ϕi” is a zeropronoun, referring to the antecedent “watashii”. The case role label “NOM” and “ACC” respectively represents the nominative and accusative roles, and ϕi represents a zero-pronoun. There are two predicates “hiita (caught)” and “yasunda (skipped)”. For the predicate “yasunda (skipped)”, “watashii-wa (Ii)” is the “skipper”, and “gakko-wo (school)” is the “entity skipped”. It is easy to identify these arguments, since syntactic dependency between an argument and its predicate is a strong clue. On the other hand, the nominative argument of the predicate “hiita (caught)” is “watashii-wa (Ii)”, and this identification is more difficult because of the lack of the direct syntactic dependency with “hiita (caught)”. The original nominative argument appears as a zero-pronoun, so that we have to explore the antecedent, an element referred to by a zero-pronoun, as the argument. As the example sentence shows, we cannot use effective syntactic information for identifying such arguments. This type of arguments is known as implicit arguments, a very problematic language 961 phenomenon for PAS analysis (Gerber and Chai, 2010; Laparra and Rigau, 2013). Previous work on Japanese PAS analysis attempted to solve this problem by identifying arguments per predicate without considering interactions between multiple predicates and arguments (Taira et al., 2008; Imamura et al., 2009). 
However, implicit arguments are likely to be shared by semantically-related predicates. In the above example (Figure 1), the implicit argument of the predicate “hiita (caught)” is shared by the other predicate “yasunda (skipped)” as its nominative argument “watashii (Ii)”. Based on this intuition, we propose methods to jointly identify optimal case arguments of all predicates in a sentence taking their interactions into account. We represent the interactions as a bipartite graph that covers all predicates and candidate arguments in a sentence, and factorize the whole relation into the second-order relations. This interaction modeling results in a hard combinatorial problem because it is required to select the optimal PAS combination from all possible PAS combinations in a sentence. To solve this issue, we extend the randomized hill-climbing algorithm (Zhang et al., 2014) to search all possible PAS in the space of bipartite graphs. We perform experiments on the NAIST Text Corpus (Iida et al., 2007), a standard benchmark for Japanese PAS analysis. Experimental results show that compared with a strong baseline, our methods achieve an improvement of 1.0-1.2 points in F-measure for total case argument identification, and especially improve performance for implicit argument identification by 2.0-2.5 points. In addition, although we exploit no external resources, we get comparable results to previous work exploiting large-scale external resources (Taira et al., 2008; Imamura et al., 2009; Sasano and Kurohashi, 2011). These results suggest that there is potential for more improvement by adding external resources. The main contributions of this work are: (1) We present new methods to jointly identify case arguments of all predicates in a sentence. (2) We propose global feature templates that capture interactions over multiple PAS. (3) Performing experiments on the NAIST Text Corpus, we demonstrate our methods are superior to a strong baseline and comparable to the methods of representative previous work. 2 Japanese Predicate Argument Structure Analysis 2.1 Task Overview In Japanese PAS analysis, we identify arguments taking part in the three major case roles, nominative (NOM), accusative (ACC) and dative (DAT) cases, for each predicate. Case arguments can be divided into three categories according to the positions relative to their predicates (Hayashibe et al., 2011): Dep: The arguments that have direct syntactic dependency with the predicate. Zero: The implicit arguments whose antecedents appear in the same sentence and have no direct syntactic dependency with the predicate. Inter-Zero: The implicit arguments whose antecedents do not appear in the same sentence. For example, in Figure 1, the accusative argument “gakko-wo (school)” of the predicate “yasunda (skipped)” is regarded as Dep, and the nominative argument “watashii-wa (I)” (the antecedent of zero-pronoun “ϕi”) of the predicate “hiita (caught)” is Zero. In this paper, we focus on the analysis for intrasentential arguments (Dep and Zero). In order to identify inter-sentential arguments (Inter-Zero), it is required to search a much broader space, such as the whole document, resulting in a much harder analysis than intra-sentential arguments.2 Therefore, we believe that quite different approaches are necessary to realize an inter-sentential PAS analysis with high accuracy, and leave it for future work. 2.2 Related Work For Japanese PAS analysis research, the NAIST Text Corpus has been used as a standard benchmark (Iida et al., 2007). 
One of the representative researches using the NAIST Text Corpus is Imamura et al. (2009). They built three distinct models corresponding to the three case roles by extracting features defined on each pair of a predicate and a candidate argument. Using each model, they select the best candidate argument for each case per predicate. Their models are based on maximum entropy model and can easily incorporate various features, resulting in high accuracy. 2Around 10-20% in F measure has been achieved in previous work (Taira et al., 2008; Imamura et al., 2009; Sasano and Kurohashi, 2011). 962 Figure 2: Intuitive image of a predicate-argument graph. This graph is factorized into the local and global features. The different line color/style indicate different cases. While in Imamura et al. (2009) one case argument is identified at a time per predicate, the method proposed by Sasano and Kurohashi (2011) simultaneously determines all the three case arguments per predicate by exploiting large-scale case frames obtained from large raw texts. They focus on identification of implicit arguments (Zero and Inter-Zero), and achieves comparable results to Imamura et al. (2009). In these approaches, case arguments were identified per predicate without considering interactions between multiple predicates and candidate arguments in a sentence. In the semantic role labeling (SRL) task, Yang and Zong (2014) pointed out that information of different predicates and their candidate arguments could help each other for identifying arguments taking part in semantic roles. They exploited a reranking method to capture the interactions between multiple predicates and candidate arguments, and jointly determine argument structures of all predicates in a sentence (Yang and Zong, 2014). In this paper, we propose new joint analysis methods for identifying case arguments of all predicates in a sentence capturing interactions between multiple predicates and candidate arguments. 3 Graph-Based Joint Models 3.1 A Predicate-Argument Graph We define predicate argument relations by exploiting a bipartite graph, illustrated in Figure 2. The nodes of the graph consist of two disjoint sets: the left one is a set of candidate arguments and the right one is a set of predicates. In this paper, we call it a predicate-argument (PA) graph. Each predicate node has three distinct edges corresponding to nominative (NOM), accusative (ACC), and dative (DAT) cases. Each edge with a case role label joins a candidate argument node with a predicate node, which represents a case argument of a predicate. For instance, in Figure 2 a1 is the nominative argument of p1, and a3 is the accusative argument of p2. Formally, a PA graph is a bipartite graph ⟨A, P, E⟩, where A is the node set consisting of candidate arguments, P the node set consisting of predicates, and E the set of edges subject to that there is exactly one edge e with a case role label c outgoing from each of the predicate nodes p to a candidate argument node a. A PA graph is defined as follows: A = {a1, ..., an, an+1 = NULL} P = {p1, ..., pm} E = {⟨a, p, c⟩| deg(p, c) = 1, ∀a ∈A, ∀p ∈P, ∀c ∈C } where deg(p, c) is the number of edges with a case role c outgoing from p, and C is the case role label set. We add a dummy node an+1, which is defined for the cases where the predicate requires no case argument or the required case argument does not appear in the sentence. 
An edge e ∈E is represented by a tuple ⟨a, p, c⟩, indicating the edge with a case role c joining a candidate argument node a and a predicate node p. An admissible PA graph satisfies the constraint deg(p, c) = 1, representing that each predicate node p has only one edge with a case role c. To identify the whole PAS for a sentence x, we predict the PA graph with an edge set corresponding to the correct PAS from the admissible PA graph set G(x) based on a score associated with a PA graph y as follows: ˜y = argmax y∈G(x) Score(x, y) A scoring function Score(x, y) receives a sentence x and a candidate graph y as its input, and returns a scalar value. In this paper, we propose two scoring functions as analysis models based on different assumptions: (1) Per-Case Joint Model assumes the interaction between multiple predicates (predicate interaction) and the independence between case roles, and (2) All-Cases Joint Model assumes the interaction between case roles (case interaction) as well as the predicate interaction. 963 3.2 Per-Case Joint Model Per-Case Joint Model assumes that different case roles are independent from each other. However, for each case, interactions between multiple predicates are considered jointly. We define the score of a PA graph y to be the sum of the scores for each case role c of the set of the case roles C: Scoreper(x, y) = ∑ c∈C Scorec(x, y) (1) The scores for each case role are defined as the dot products between a weight vector θc and a feature vector ϕc(x, E(y, c)): Scorec(x, y) = θc · ϕc(x, E(y, c)) (2) where E(y, c) is the edge set associated with a case role c in the candidate graph y, and the feature vector is defined on the edge set. The edge set E(y, c) in the equation (2) is utilized for the two types of features, the local features and global features, inspired by (Huang, 2008), defined as follows: θc · ϕc(x, E(y, c)) = ∑ e∈E(y,c) θc ϕl(x, e) + θc ϕg(x, E(y, c)) (3) where ϕl(x, e) denotes the local feature vector, and ϕg(x, E(y, c)) the global feature vector. The local feature vector ϕl(x, e) is defined on each edge e in the edge set E(y, c) and a sentence x, which captures a predicate-argument pair. Consider the example of Figure 2. For Per-Case Joint Model, we use edges, ea1p1, ea1p2, and ea2p3, as local features to compute the score of the edge set with the nominative case. In addition, the global feature vector ϕg(x, E(y, c)) is defined on the edge set E(y, c), and enables the model to utilize linguistically richer information over multiple predicate-argument pairs. In this paper, we exploit second-order relations, similar to the second-order edge factorization of dependency trees (McDonald and Pereira, 2006). We make a set of edge pairs Epair by combining two edges ei, ej in the edge set E(y, c), as follows: Epair = { {ei, ej} | ∀ei, ej ∈E(y, c), ei ̸= ej } For instance, in the PA graph in Figure 2, to compute the score of the nominative arguments, we make three edge pairs: {{ea1p1, ea1p2}, {ea1p1, ea2p3}, {ea1p2, ea2p3}} Then, features are extracted from these edge pairs and utilized for the score computation. For the accusative and dative cases, their scores are computed in the same manner. Then, we obtain the resulting score of the PA graph by summing up the scores of the local and global features. If we do not consider the global features, the model reduces to a per-case local model similar to previous work (Imamura et al., 2009). 
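As a rough illustration of these definitions, the sketch below represents a PA graph and the Per-Case score of Equations (1)-(3) in Python. The two feature functions are placeholders for the local and global templates described in Section 4, and none of the names are taken from the authors' implementation.

    from itertools import combinations

    CASES = ("NOM", "ACC", "DAT")
    NULL = None   # dummy argument node a_{n+1}

    class PAGraph:
        # bipartite PA graph: exactly one (argument, case) edge per predicate and case
        def __init__(self, predicates, cases=CASES):
            self.edges = {(p, c): NULL for p in predicates for c in cases}

        def edge_set(self, case):
            # edges <a, p, c> carrying the given case role
            return [(a, p, c) for (p, c), a in self.edges.items() if c == case]

    def dot(w, feats):
        return sum(w.get(f, 0.0) * v for f, v in feats.items())

    def phi_local(sentence, edge):
        # placeholder for the per-edge (predicate-argument pair) features
        a, p, c = edge
        return {"local:%s:%s:%s" % (c, a, p): 1.0}

    def phi_global(sentence, edge_i, edge_j):
        # placeholder for the second-order features over a pair of same-case edges
        return {"global:%s:%s" % (edge_i[2], edge_j[2]): 1.0}

    def score_per_case(sentence, graph, weights):
        # Score_per(x, y) = sum over cases of the local scores of each edge
        # plus the global scores of each pair of edges with that case
        total = 0.0
        for c in CASES:
            edges = graph.edge_set(c)
            for e in edges:
                total += dot(weights[c], phi_local(sentence, e))
            for ei, ej in combinations(edges, 2):
                total += dot(weights[c], phi_global(sentence, ei, ej))
        return total

The All-Cases Joint Model described next differs only in pooling the edges of all three cases before forming the pairs and in using a single weight vector.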
3.3 All-Cases Joint Model While Per-Case Joint Model assumes the predicate interaction with the independence between case roles, All-Cases Joint Model assumes the case interaction together with the predicate interaction. Our graph-based formulation is very flexible and easily enables the extension of Per-Case Joint Model to All-Cases Joint Model. Therefore, we extend Per-Case Joint Model to All-Cases Joint Model to capture the interactions between predicates and all case arguments in a sentence. We define the score of a PA graph y based on the local and global features as follows: Scoreall(x, y) = ∑ e∈E(y) θ · ϕl(x, e) + θ · ϕg(x, E(y)) (4) where E(y) is the edge set associated with all the case roles on the candidate graph y, ϕl(x, e) is the local feature vector defined on each edge e in the edge set E(y), and ϕg(x, E(y)) is the global feature vector defined on the edge set E(y). Consider the PA graph in Figure 2. The local features are extracted from each edge: Nominative : ea1p1, ea1p2, ea2p3 Accusative : ea2p1, ea3p2, ea3p3 Dative : ea3p1, ea4p2, ea4p3 For the global features, we make a set of edge pairs Epair by combining two edges ei, ej in the edge set E(y), like Per-Case Joint Model. However, in the All-Cases Joint Model, the global features may involve different cases (i.e. mixing edges with different case roles). For the PA graph in Figure 2, we make the edge pairs {ea1p1, ea2p1}, {ea3p1, ea1p2}, {ea3p2, ea4p3}, and so on. From these edge pairs, we extract information as global features to compute a graph score. 964 Structure Name Description Diff-Arg PAIR ⟨pi.rf ◦pj.rf ◦pi.vo ◦pj.vo ⟩, ⟨ai.ax ◦ai.rp ◦pi.ax ◦pi.vo ⟩, ⟨aj.ax ◦aj.rp ◦pj.ax ◦pj.vo ⟩ TRIANGLE ⟨ai.ax ◦ai.ax ◦ai.rp ◦aj.rp ◦pi.ax ◦pi.vo ⟩, ⟨ai.ax ◦aj.ax ◦ai.rp ◦aj.rp ◦pj.ax ◦pj.vo ⟩, QUAD ⟨ai.ax ◦aj.ax ◦ai.rp ◦aj.rp ◦pi.vo ◦pj.vo ⟩ ⟨ai.ax ◦aj.ax ◦pi.ax ◦pj.ax ◦ai.rp ◦aj.rp ◦pi.vo ◦pj.vo ⟩ ⟨ai.ax ◦aj.ax ◦pi.rf ◦pj.rf ◦ai.rp ◦ai.rp ◦pi.vo ◦pi.vo ⟩ Co-Arg BI-PREDS ⟨ai.rp ◦pi.rf ◦pj.rf ⟩, ⟨ai.ax ◦ai.rp ◦pi.rf ◦pj.rf ⟩ DEP-REL ⟨ai.ax ◦ai.rp ◦pi.ax ◦pj.ax ◦pi.vo ◦pj.vo ◦(x, y).dep ⟩ if x depends on y for x,y in (pi,pj), (ai,pi), (ai,pj), (pi,ai), (pj,ai) Table 1: Global feature templates. pi, pj is a predicate, ai is the argument connected with pi, and aj is the argument connected with pj. Feature conjunction is indicated by ◦; ax=auxiliary, rp=relative position, vo=voice, rf=regular form, dep=dependency. All the features are conjoined with the relative position and the case role labels of the two predicates. 4 Global Features Features are extracted based on feature templates, which are functions that draw information from the given entity. For instance, one feature template ϕ100 = a.ax ◦p.vo is a conjunction of two atomic features a.ax and p.vo, representing an auxiliary word attached to a candidate argument (a.ax) and the voice of a predicate (p.vo). We design several feature templates for characterizing each specific PA graph. Consider the PA graph constructed from the sentence in Figure 1, and a candidate argument “kaze-wo (a cold)” and a predicate “hiita (caught)” are connected with an edge. To characterize the graph, we draw some linguistic information associated with the edge. Since the auxiliary word attached to the candidate argument is “wo” and the voice of the predicate is “active”, the above feature template ϕ100 will generate a feature instance as follows. (a.ax = wo) ◦(p.vo = active) Such features are utilized for the local and global features in the joint models. 
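As a small illustration of how such templates expand into feature instances, the sketch below conjoins atomic attributes in the same way as the ϕ100 example above; the attribute values are those of the "kaze-wo (a cold)" and "hiita (caught)" edge, and the function name is ours, not the authors'.

    def instantiate(template, values):
        # conjoin the atomic attributes named in the template into one feature string
        return " ◦ ".join("(%s = %s)" % (attr, values[attr]) for attr in template)

    phi_100 = ("a.ax", "p.vo")                   # template: a.ax ◦ p.vo
    values = {"a.ax": "wo", "p.vo": "active"}    # drawn from the edge between kaze-wo and hiita
    print(instantiate(phi_100, values))          # (a.ax = wo) ◦ (p.vo = active)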
We propose the global feature templates that capture multiple PAS interactions based on the Diff-Arg and Co-Arg structures, depicted in the right part of Figure 1. The Diff-Arg structure represents that the two predicates have different candidate arguments, and the Co-Arg structure represents that the two predicates share the same candidate argument. Based on these structures, we define the global feature templates that receive a pair of edges in a PA graph as input and return a feature vector, shown in Table 1. 4.1 Diff-Arg Features The feature templates based on the Diff-Arg structure are three types: PAIR (a pair of predicateargument relation), TRIANGLE (a predicate and its two arguments relation), and QUAD (two predicate-argument relations). PAIR These feature templates denote where the target argument is located relative to another argument and the two predicates in the Diff-Arg structure. We combine the relative position information (rp) with the auxiliary words (ax) and the voice of the two predicates (vo). TRIANGLE This type of feature templates captures the interactions between three elements: two candidate arguments and a predicate. Like the PAIR feature templates, we encode the relative position information of two candidate arguments and a predicate with the auxiliary words and voice. QUAD When we judge if a candidate argument takes part in a case role of a predicate, it would be beneficial to grasp information of another predicate-argument pair. The QUAD feature templates capture the mutual relation between four elements: two candidate arguments and predicates. We encode the relative position information, the auxiliary words, and the voice. 4.2 Co-Arg Features To identify predicates that take implicit (Zero) arguments, we set two feature types, BI-PREDS and DEP-REL, based on the Co-Arg structure. BI-PREDS For identifying an implicit argu965 Input: the set of cases to be analyzed C, parameter θc, sentence x Output: a locally optimal PA graph ˜y 1: Sample a PA graph y(0) from G(x) 2: t ←0 3: for each case c ∈C do 4: repeat 5: Yc ←NeighborG(y(t), c) ∪y(t) 6: y(t+1) ←argmax y∈Yc θc · ϕc(x, E(y, c)) 7: t ←t + 1 8: until y(t) = y(t+1) 9: end for 10: return ˜y ←y(t) Figure 3: Hill-Climbing for Per-Case Joint Model Input: the set of cases to be analyzed C, parameter θ, sentence x Output: a locally optimal PA graph ˜y 1: Sample a PA graph y(0) from G(x) 2: t ←0 3: repeat 4: Y ←NeighborG(y(t)) ∪y(t) 5: y(t+1) ←argmax y∈Y θ · ϕ(x, E(y)) 6: t ←t + 1 7: until y(t) = y(t+1) 8: return ˜y ←y(t) Figure 4: Hill-Climbing for All-Cases Joint Model ment of a predicate, information of another semantically-related predicate in the sentence could be effective. We utilize bi-grams of the regular forms (rf) of the two predicates in the Co-Arg structure to capture the predicates that are likely to share the same argument in the sentence. DEP-REL We set five distinct feature templates to capture dependency relations (dep) between the shared argument and the two predicates. If two elements have a direct dependency relation, we encode its dependency relation with the auxiliary words and the voice. 5 Inference and Training 5.1 Inference for the Joint Models Global features make the inference of finding the maximum scoring PA graph more difficult. For searching the graph with the highest score, we propose two greedy search algorithms by extending the randomized hill-climbing algorithm proposed in (Zhang et al., 2014), which has been shown to achieve the state-of-the-art performance in dependency parsing. 
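A minimal Python rendering of the search in Figure 3 may be useful here. The neighbor generator, the per-case scorer, and the initial-graph sampler are assumed to exist (they correspond to NeighborG, the dot products of Equation (3), and uniform sampling from G(x)); the code is a sketch rather than the authors' implementation.

    def hill_climb_per_case(sentence, cases, score_c, neighbors_c, sample_graph):
        # score_c(sentence, graph, c): score of the case-c edges
        # neighbors_c(graph, c): admissible graphs differing from graph in one case-c edge
        # sample_graph(sentence): a uniformly sampled admissible initial graph
        y = sample_graph(sentence)                               # line 1
        for c in cases:                                          # line 3
            while True:
                current = score_c(sentence, y, c)
                best = max(neighbors_c(y, c),
                           key=lambda g: score_c(sentence, g, c),
                           default=None)
                if best is None or score_c(sentence, best, c) <= current:
                    break                                        # line 8: local optimum for case c
                y = best                                         # line 6: move to a higher-scoring graph
        return y

    def decode(sentence, cases, score_c, neighbors_c, sample_graph, restarts=10):
        # K independent restarts; keep the highest-scoring local optimum (Section 5.1)
        runs = [hill_climb_per_case(sentence, cases, score_c, neighbors_c, sample_graph)
                for _ in range(restarts)]
        return max(runs, key=lambda g: sum(score_c(sentence, g, c) for c in cases))

The All-Cases variant of Figure 4 is obtained by dropping the per-case loop and letting the neighbor set change any one edge regardless of its case role.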
Figure 3 describes the pseudo code of our proposed algorithm for Per-Case Joint Model. Firstly, we set an initial PA graph y(0) sampled uniformly from the set of admissible PA graphs G(x) (line 1 in Figure 3). Then, the union Yc is constructed from the set of neighboring graphs with a case NeighborG(y(t), c), which is a set of admissible graphs obtained by changing one edge with the case c in y(t), and the current graph y(t) (line 5). The current graph y(t) is updated to a higher scoring graph y(t+1) selected from the union Yc (line 6). The algorithm continues until no more score improvement is possible by changing an edge with the case c in y(t) (line 8). This repetition is executed for other case roles in the same manner. As a result, we can get a locally optimal graph ˜y. Figure 4 describes the pseudo code of the algorithm for All-Cases Joint Model. The large part of the algorithm is the same as that for Per-Case Joint Model. The difference is that the union Y consists of the current graph y(t) and the neighboring graph set obtained by changing one edge in y(t) regardless of case roles (line 4 in Figure 4), and that the iteration process for each case role (line 3 in Figure 3) is removed. The algorithm also continues until no more score improvement is possible by changing an edge in y(t), resulting in a locally optimal graph ˜y. Following Zhang et al. (2014), for a given sentence x, we repeatedly run these algorithms with K consecutive restarts. Each run starts with initial graphs randomly sampled from the set of admissible PA graphs G(x), so that we obtain K local optimal graphs by K restarts. Then the highest scoring one of K graphs is selected for the sentence x as the result. Each run of the algorithms is independent from each other, so that multiple runs are easily executable in parallel. 5.2 Training Given a training data set D = {(ˆx, ˆy)}N i , the weight vectors θ (θc) in the scoring functions of the joint models are estimated by using machine learning techniques. We adopt averaged perceptron (Collins, 2002) with a max-margin technique: 966 ∀i ∈{1, ..., N}, y ∈G(xi), Score(ˆxi, ˆyi) ≥Score(ˆxi, y) + ∥ˆyi −y∥1 −ξi where ξi ≥0 is the slack variable and ∥ˆyi −y∥1 is the Hamming distance between the gold PA graph ˆyi and a candidate PA graph y of the admissible PA graphs G(xi). Following Zhang et al. (2014), we select the highest scoring graph ˜y as follows: TRAIN : ˜y = argmax y∈G(ˆxi) {Score(ˆxi, y)+∥ˆyi−y∥1} TEST : ˜y = argmax y∈G(x) {Score(x, y)} Using the weight vector tuned by the training, we perform analysis on a sentence x in the test set. 6 Experiment 6.1 Experimental Settings Data Set We evaluate our proposed methods on the NAIST Text Corpus 1.5, which consists of 40,000 sentences of Japanese newspaper text (Iida et al., 2007). While previous work has adopted the version 1.4 beta, we adopt the latest version. The major difference between version 1.4 beta and 1.5 is revision of dative case (corresponding to Japanese case particle “ni”). In 1.4 beta, most of adjunct usages of “ni” are mixed up with the argument usages of “ni”, making the identification of dative cases seemingly easy. Therefore, our results are not directly comparable with previous work. We adopt standard train/dev/test split (Taira et al., 2008) as follows: Train Articles: Jan 1-11, Editorials: Jan-Aug Dev Articles: Jan 12-13, Editorials: Sept Test Articles: Jan 14-17, Editorials: Oct-Dec We exclude inter-sentential arguments (InterZero) in our experiments. 
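Before the feature and baseline details, a brief sketch of the training procedure of Section 5.2 may help: a cost-augmented decode (the TRAIN argmax above, i.e. the model score plus the Hamming distance to the gold graph) is followed by a standard perceptron update and weight averaging. The decode_with_cost and phi functions are assumed inputs, and the sketch is illustrative rather than the authors' code.

    def train(data, phi, decode_with_cost, epochs=20):
        # data: list of (sentence, gold_graph) pairs
        # phi(sentence, graph): sparse feature dict over all edges (local + global)
        # decode_with_cost(sentence, gold, w): argmax over graphs of
        #   Score(x, y) + Hamming(gold, y), e.g. hill-climbing with the cost added
        w, w_sum, steps = {}, {}, 0
        for _ in range(epochs):
            for sentence, gold in data:
                pred = decode_with_cost(sentence, gold, w)
                if pred != gold:
                    for f, v in phi(sentence, gold).items():    # promote gold features
                        w[f] = w.get(f, 0.0) + v
                    for f, v in phi(sentence, pred).items():    # demote predicted features
                        w[f] = w.get(f, 0.0) - v
                steps += 1
                for f, v in w.items():                          # accumulate for averaging
                    w_sum[f] = w_sum.get(f, 0.0) + v
        return {f: v / steps for f, v in w_sum.items()}         # averaged perceptron weights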
Our features make use of the annotated POS tags, phrase boundaries, and dependency relations annotated in the NAIST Text Corpus. We do not use any external resources. Baseline We adopt the pointwise method (using only local features) proposed by Imamura et al. (2009) as the baseline. They built three distinct models corresponding to the three case roles. By using each model, they estimate the likelihood that each candidate argument plays a case role of the target predicate as a score, and independently select the highest scoring one per predicate. feature Dep Zero Total PC Joint local 84.59 42.55 77.89 + global 85.51 44.54 78.85 AC Joint local 84.17 41.33 77.43 + global 85.92 44.45 79.17 Table 2: Global vs Local features on the development sets in F-measures. “PC Joint” denotes the Per-Case Joint Model, and “AC Joint” denotes the All-Cases Joint Model. Features The baseline utilizes the Baseline Features used in Imamura et al. (2009) and Grammatical features used in Hayashibe et al. (2009), as the “Local Features”. In addition, the joint models utilize the “Global Features” in Table 1. Implementation Details For our joint models with hill-climbing, we report the average performance across ten independent runs with 10 restarts, which almost reaches convergence 3. We train the baseline and our joint models for 20 iterations with averaged perceptron. 6.2 Results Local Features vs Global Features Table 2 shows the effectiveness of the global features on the development sets. We incrementally add the global features to the both models that utilize only the local features. The results show that the global features improve the performance by about 1.0 point in F-measures in total. For and are particularly beneficial to the implicit (Zero) argument identification (an improvement of 1.99 points in Per-Case Joint Model and 3.12 points in All-Cases Joint Model). Pointwise Methods vs Joint Methods Table 3 presents the F-measures of the baseline and our joint methods on the test set of the NAIST Text Corpus. We used the bootstrap resampling method as the significance test. In most of the metrics, our proposed joint methods outperform the baseline pointwise method. Note that since PerCase Joint Model yields better results compared with the baseline, capturing the predicate interaction is beneficial to Japanese PAS analysis. In addition, the joint methods achieve a considerable improvement of 2.0-2.5 points in F-measure for 3Performance did not change when increasing the number of restarts 967 Case Type # of Args. Baseline PC Joint AC Joint NOM Dep 14055 86.50 87.54 † 88.13 † ‡ Zero 4935 45.56 47.62 48.11 Total 18990 77.31 78.39 † 79.03 † ‡ ACC Dep 9473 92.84 ⋆ 93.09 † ⋆ 92.74 Zero 833 21.38 22.73 24.43 Total 10306 88.86 ⋆ 89.00 † ⋆ 88.47 DAT Dep 2518 30.97 34.29 † 38.39 † ‡ Zero 239 0.83 0.83 4.80 Total 2757 29.02 32.20 † 36.35 † ‡ ALL Dep 26046 85.06 85.79 † 86.07 † ‡ Zero 6007 41.65 43.60 44.09 Total 32053 78.15 78.91 † 79.23 † ‡ Table 3: F-measures of the three methods in the test sets. The bold values denote the highest F-measures among all the three methods. Statistical significance with p < 0.05 is marked with † compared with Baseline, ‡ compared with PC Joint, and ⋆compared with AC Joint. Dep Zero NOM ACC DAT NOM ACC DAT TA08 75.53 88.20 89.51 30.15 11.41 3.66 IM09 87.0 93.9 80.8 50.0 30.8 0.0 S&K11 39.5 17.5 8.9 PC Joint 87.54 93.09 34.19 47.62 22.73 0.83 AC Joint 88.13 92.74 38.39 48.11 24.44 4.80 Table 4: Comparison with previous work using the NAIST Text Corpus in F-measure. 
TA08 is Taira et al. (2008), IM09 is Imamura et al. (2009), and S&K11 is Sasano & Kurohashi (2011). Their results are not directly comparable to ours since they use external resources and the NAIST Text Corpus 1.4 beta. the implicit arguments (Zero), one of the problematic issues in Japanese PAS analysis. Comparing the joint methods, each of our two joint methods is effective for a different case role. Per-Case Joint Model is better at the ACC case, and All-Cases Joint Model is better at the NOM and DAT cases. One of the possible explanations is that the distribution of ACC cases is different from NOM cases. While the ratio of Dep and Zero arguments for ACC cases is 90:10, the ratio for NOM cases is 75:25. This might have some negative effects on the ACC case identification with AllCases Joint Model. However, in total, All-Cases Joint Model achieves significantly better results. This suggests that capturing case interactions improves performance of Japanese PAS analysis. Existing Methods vs Joint Methods To compare our proposed methods with previous work, we pick the three pieces of representative previous work exploiting the NAIST Text Corpus: Taira et al. (2008) (TA08), Imamura et al. (2009) (IM09), and Sasano and Kurohashi (2011) (S&K11). Sasano and Kurohashi (2011) focus on the analysis for the Zero and Inter-Zero arguments, and do not report the results on the Dep arguments. With respect to the Dep arguments, the All-Cases Joint Model achieves the best result for the NOM cases, Imamura et al. (2009) the best for the ACC cases, and Taira et al. (2008) the best for the DAT cases. In terms of the Zero arguments, Imamura et al. (2009) is the best for the NOM and ACC cases, and Sasano and Kurohashi (2011) the best for the DAT cases. Our joint methods achieve high performance comparable to Imamura et al. (2009). However, because they used additional external resources and a different version of the NAIST Text Corpus, the results of previous work are not directly comparable to ours. Our research direction and contributions are orthogonal to theirs, and adding their external resources could potentially leads to much better results. 968 7 Conclusion We have presented joint methods for Japanese PAS analysis, which model interactions between multiple predicates and arguments using a bipartite graph and greedily search the optimal PAS combination in a sentence. Experimental results shows that capturing the predicate interaction and case interaction is effective for Japanese PAS analysis. In particular, implicit (Zero) argument identification, one of the problematic issues in Japanese PAS analysis, is improved by taking such interactions into account. Since this framework is applicable to the argument classification in SRL, applying our methods to that task is an interesting line of the future research. In addition, the final results of our joint methods are comparable to representative existing methods despite using no external resources. For future work, we plan to incorporate external resources for our joint methods. Acknowledgments We are grateful to the anonymous reviewers. This work is partially supported by a JSPS KAKENHI Grant Number 26730121 and 15K16053. References Michael Collins. 2002. Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1– 8, Philadelphia, July. Association for Computational Linguistics. 
Matthew Gerber and Joyce Chai. 2010. Beyond nombank: A study of implicit arguments for nominal predicates. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, pages 1583–1592, Uppsala, Sweden, July. Association for Computational Linguistics. Yuta Hayashibe, Mamoru Komachi, and Yuji Matsumoto. 2011. Japanese predicate argument structure analysis exploiting argument position and type. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 201–209, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of 46th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 586–594, Columbus, Ohio, June. Association for Computational Linguistics. Ryu Iida, Mamoru Komachi, Kentaro Inui, and Yuji Matsumoto. 2007. Annotating a japanese text corpus with predicate-argument and coreference relations. In Proceedings of the Linguistic Annotation Workshop, pages 132–139, Prague, Czech Republic, June. Association for Computational Linguistics. Kenji Imamura, Kuniko Saito, and Tomoko Izumi. 2009. Discriminative approach to predicateargument structure analysis with zero-anaphora resolution. In Proceedings of the Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and 4th International Joint Conference on Natural Language Processing, pages 85–88, Suntec, Singapore, August. Association for Computational Linguistics. Egoitz Laparra and German Rigau. 2013. Impar: A deterministic algorithm for implicit semantic role labelling. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 1180–1189, Sofia, Bulgaria, August. Association for Computational Linguistics. Ryan McDonald and Fernando Pereira. 2006. Online learning of approximate dependency parsing algorithms. In Proceedings of the 11th conference on European Chapter of the Association for Computational Linguistics (EACL), pages 81–88, Trento, Italy, April. Association for Computational Linguistics. 969 Ryohei Sasano and Sadao Kurohashi. 2011. A discriminative approach to japanese zero anaphora resolution with large-scale lexicalized case frames. In Proceedings of 5th International Joint Conference on Natural Language Processing, pages 758–766, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing. Hirotoshi Taira, Sanae Fujita, and Masaaki Nagata. 2008. A japanese predicate argument structure analysis using decision lists. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 523–532, Honolulu, Hawaii, October. Association for Computational Linguistics. Haitong Yang and Chengqing Zong. 2014. Multipredicate semantic role labeling. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 363– 373, Doha, Qatar, October. Association for Computational Linguistics. Yuan Zhang, Tao Lei, Regina Barzilay, and Tommi Jaakkola. 2014. Greed is good if randomized: New inference for dependency parsing. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1013– 1024, Doha, Qatar, October. Association for Computational Linguistics. 970
2015
93